CONSTRUCTION SIMULATION
AN INTRODUCTION USING SIMPHONY

Simaan M. AbouRizk
Stephen A. Hague
Ronald Ekyalimpa

HOLE SCHOOL OF CONSTRUCTION ENGINEERING


DEPARTMENT OF CIVIL & ENVIRONMENTAL ENGINEERING
UNIVERSITY OF ALBERTA

© 2016 by S. AbouRizk

COPYRIGHT NOTICE:

All rights reserved. No part of this book may be reproduced or transmitted in any form or by any
means without written permission from the authors, except in the case of brief quotations
embodied in critical articles or reviews.

For information, contact:


Simaan AbouRizk

Hole School of Construction Engineering


Department of Civil and Environmental Engineering
University of Alberta
Donadeo Innovation Centre for Engineering
7-232, 9211-116 Street
Edmonton, AB T6G 1G9

ISBN: 978-1-55195-357-1

First Edition: April 2016


Contents

Preface xi
Acknowledgements xv
Dedication xvii
1 Introduction to Simulation 1
1.1 Construction Engineering: Context . . . . . . . . . . . . . . . 1
1.2 Engineers Work with Models . . . . . . . . . . . . . . . . . . . 3
1.3 Responsibilities of Construction Engineers . . . . . . . . . . . 8
1.4 Simulation Definitions . . . . . . . . . . . . . . . . . . . . . 10
1.5 Types of Simulation . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.1 Dynamic Simulation Models . . . . . . . . . . . . . . . 12
1.5.2 Discrete Event Simulation Models . . . . . . . . . . . . 12
1.5.3 Continuous Change Models . . . . . . . . . . . . . . . 14
1.5.4 Static Simulation Models . . . . . . . . . . . . . . . . . 14
1.5.5 Deterministic Simulation Models . . . . . . . . . . . . 14
1.5.6 Stochastic/Monte Carlo Simulation Models . . . . . . . 17
1.5.7 Other Types of Simulation . . . . . . . . . . . . . . . . 17
1.5.8 4-D Modelling and Animations . . . . . . . . . . . . . 17
1.5.9 Agent-Based Modelling . . . . . . . . . . . . . . . . . . 17
1.5.10 System Dynamics . . . . . . . . . . . . . . . . . . . . . 18
1.6 Modelling Systems . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6.1 Modelling Dynamic Systems . . . . . . . . . . . . . . . 19
1.6.2 A Simple Truck-Shovel Problem . . . . . . . . . . . . . 20
1.7 Simulation Software . . . . . . . . . . . . . . . . . . . . . . . . 22
1.8 Developing Simulation Models . . . . . . . . . . . . . . . . . . 25
1.9 Applications of Simulation in Construction . . . . . . . . . . . 27


2 Review of Statistics 29
2.1 Input Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.1.1 Identifying an Appropriate Distribution . . . . . . . . . 31
2.1.2 Estimating Distribution Parameters . . . . . . . . . . . 34
2.1.3 Testing for Goodness of Fit . . . . . . . . . . . . . . . 34
2.2 Output Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2.1 Developing Point and Interval Estimates . . . . . . . . 36
2.3 Selecting Distributions in Simphony.NET . . . . . . . . . . . . 38
2.4 Example of Input Modelling and Output Analysis . . . . . . . 41

3 Verification and Validation 49


3.1 Simulation Model Verification . . . . . . . . . . . . . . . . . . 49
3.2 Simulation Model Validation . . . . . . . . . . . . . . . . . . . 52
3.2.1 Conceptual Model Validation . . . . . . . . . . . . . . 55
3.2.2 Input Data Validation . . . . . . . . . . . . . . . . . . 55
3.2.3 Operational Validation . . . . . . . . . . . . . . . . . . 56
3.3 Simulation Model Accreditation . . . . . . . . . . . . . . . . . 57

4 Modelling with CYCLONE 59


4.1 A Motivational Example . . . . . . . . . . . . . . . . . . . . . 60
4.2 CYCLONE . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2.1 Fundamental Concepts . . . . . . . . . . . . . . . . . . 68
4.2.2 A CYCLONE Earthmoving Operation . . . . . . . . . 76
4.3 Hand Simulation . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.3.1 Hand Simulation with Statistics . . . . . . . . . . . . . 85
4.3.2 Example: A Simple Earthmoving Model . . . . . . . . 85
4.3.3 Example: Riprap Installation . . . . . . . . . . . . . . 93
4.4 North LRT Case Study . . . . . . . . . . . . . . . . . . . . . . 99
4.4.1 Project Description . . . . . . . . . . . . . . . . . . . . 99
4.4.2 Understanding the SEM Process . . . . . . . . . . . . . 100
4.4.3 The North LRT Tunnel . . . . . . . . . . . . . . . . . . 102
4.4.4 CYCLONE Modelling . . . . . . . . . . . . . . . . . . 107
4.4.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.4.6 Embellishment One . . . . . . . . . . . . . . . . . . . . 107
4.4.7 Embellishment Two . . . . . . . . . . . . . . . . . . . . 108

5 General Purpose Modelling 111


5.1 Building a Simple Model . . . . . . . . . . . . . . . . . . . . . 112
5.1.1 Example: An Excavation Process . . . . . . . . . . . . 113
5.1.2 Primary Elements . . . . . . . . . . . . . . . . . . . . . 115
5.1.3 Examining Results . . . . . . . . . . . . . . . . . . . . 118
5.2 Hand Simulation . . . . . . . . . . . . . . . . . . . . . . . . . 121
5.2.1 Example: A Simple Earthmoving Model . . . . . . . . 124
5.2.2 Example: A Concrete Batch Plant . . . . . . . . . . . 131
5.3 Modelling Production Systems . . . . . . . . . . . . . . . . . . 142
5.3.1 A Tunnelling Problem . . . . . . . . . . . . . . . . . . 144
5.3.2 Resource Elements . . . . . . . . . . . . . . . . . . . . 146
5.3.3 The Train Cycle . . . . . . . . . . . . . . . . . . . . . . 150
5.3.4 Statistic Elements . . . . . . . . . . . . . . . . . . . . . 154
5.3.5 Collecting Train Cycle Time . . . . . . . . . . . . . . . 156
5.3.6 Generate and Consolidate Elements . . . . . . . . . . . 159
5.3.7 Improved Modelling of the TBM . . . . . . . . . . . . 160
5.3.8 Valve and Branch Elements . . . . . . . . . . . . . . . 165
5.3.9 Track Extension and Surveying . . . . . . . . . . . . . 166
5.4 Adding User Written Code to Models . . . . . . . . . . . . . . 169
5.4.1 The Execute Element . . . . . . . . . . . . . . . . . . . 169
5.4.2 Local vs. Global Attributes . . . . . . . . . . . . . . . 170
5.4.3 Collecting Statistics . . . . . . . . . . . . . . . . . . . . 172
5.4.4 Opening and Closing Valves . . . . . . . . . . . . . . . 173
5.4.5 Scheduling Events . . . . . . . . . . . . . . . . . . . . . 174
5.4.6 Capturing and Releasing Resources . . . . . . . . . . . 176
5.4.7 Other Methods . . . . . . . . . . . . . . . . . . . . . . 178

6 Continuous Simulation 181


6.1 Differential Equations . . . . . . . . . . . . . . . . . . . . . . 182
6.1.1 A Motivating Example . . . . . . . . . . . . . . . . . . 182
6.1.2 Modelling the Problem Analytically . . . . . . . . . . . 184
6.1.3 Modelling the Problem in Simphony . . . . . . . . . . 185
6.2 Continuous Modelling Elements . . . . . . . . . . . . . . . . . 186
6.2.1 Stock . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.2.2 Source . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.2.3 Sink . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
6.2.4 Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
6.2.5 Watch . . . . . . . . . . . . . . . . . . . . . . . . . . . 187

6.2.6 Runge-Kutta-Fehlberg Integration . . . . . . . . . . . . 188


6.3 Example: Water Draining from a Tank . . . . . . . . . . . . . 190
6.4 Example: Chemical Tanks . . . . . . . . . . . . . . . . . . . . 191
6.4.1 Analytical Solution . . . . . . . . . . . . . . . . . . . . 192
6.4.2 Modelled Solution . . . . . . . . . . . . . . . . . . . . . 193
6.5 Example: Sanitary Sewer Handling . . . . . . . . . . . . . . . 194
6.5.1 Solution Overview . . . . . . . . . . . . . . . . . . . . 197
6.5.2 Discrete Event Part . . . . . . . . . . . . . . . . . . . . 198
6.5.3 Continuous Model . . . . . . . . . . . . . . . . . . . . 202
6.5.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
6.5.5 Embellishment One . . . . . . . . . . . . . . . . . . . . 205
6.5.6 Embellishment Two . . . . . . . . . . . . . . . . . . . . 205
6.6 Example: Tunnel Construction . . . . . . . . . . . . . . . . . 206
6.6.1 The SA1A Tunnelling Project . . . . . . . . . . . . . . 206
6.6.2 Simulation Modelling Strategy . . . . . . . . . . . . . . 211
6.6.3 Discrete Event Simulation Models . . . . . . . . . . . . 212
6.6.4 Continuous Simulation Models . . . . . . . . . . . . . . 216
6.6.5 Delay Models . . . . . . . . . . . . . . . . . . . . . . . 222
6.6.6 Simulation Model Results . . . . . . . . . . . . . . . . 228
6.6.7 Embellishments to the Base Delay Model . . . . . . . . 232
6.7 Modelling Strategies . . . . . . . . . . . . . . . . . . . . . . . 233
6.7.1 Continuous Activities . . . . . . . . . . . . . . . . . . . 233

7 Statistical Aspects of Simulation 235


7.1 Background to the Monte Carlo Method . . . . . . . . . . . . 236
7.2 Monte Carlo Simulation in Construction . . . . . . . . . . . . 239
7.3 Range Estimating . . . . . . . . . . . . . . . . . . . . . . . . . 241
7.3.1 Shaft Construction Example . . . . . . . . . . . . . . . 242
7.3.2 Tunnel Construction Example . . . . . . . . . . . . . . 244
7.4 Generating Random Numbers . . . . . . . . . . . . . . . . . . 246
7.5 Generating Random Deviates . . . . . . . . . . . . . . . . . . 249
7.5.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 249
7.5.2 The Inverse Transform Method . . . . . . . . . . . . . 250
7.5.3 The Acceptance/Rejection Method . . . . . . . . . . . 253
7.5.4 The Box-Muller Method . . . . . . . . . . . . . . . . . 255
7.5.5 Other Techniques . . . . . . . . . . . . . . . . . . . . . 256
7.6 Input Modelling for Simulation Studies . . . . . . . . . . . . . 256
7.6.1 Empirical Distributions . . . . . . . . . . . . . . . . . . 256

7.6.2 Selecting a Distribution . . . . . . . . . . . . . . . . . 257


7.6.3 The Method of Moments . . . . . . . . . . . . . . . . . 261
7.6.4 The Method of Maximum Likelihood . . . . . . . . . . 265
7.6.5 The Method of Least Squares . . . . . . . . . . . . . . 268
7.6.6 Testing for Goodness of Fit . . . . . . . . . . . . . . . 269
7.7 Output Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 274
7.7.1 Checking for Normality . . . . . . . . . . . . . . . . . . 275
7.7.2 Developing Point and Interval Estimates . . . . . . . . 276
7.8 Example: Equipment Breakdowns . . . . . . . . . . . . . . . . 278
7.8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 278
7.8.2 Problem Description . . . . . . . . . . . . . . . . . . . 279
7.8.3 Modelling Assumptions . . . . . . . . . . . . . . . . . . 280
7.8.4 Modelling Strategy . . . . . . . . . . . . . . . . . . . . 280
7.8.5 Input Modelling . . . . . . . . . . . . . . . . . . . . . . 284
7.8.6 Base Model . . . . . . . . . . . . . . . . . . . . . . . . 285
7.8.7 Embellishment One . . . . . . . . . . . . . . . . . . . . 302
7.8.8 Embellishment Two . . . . . . . . . . . . . . . . . . . . 311

A Simphony.NET User's Guide 319


A.1 Simphony.NET Overview . . . . . . . . . . . . . . . . . . . . . 319
A.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 319
A.1.2 Basic Features . . . . . . . . . . . . . . . . . . . . . . . 320
A.2 Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . 321
A.2.1 Installation . . . . . . . . . . . . . . . . . . . . . . . . 321
A.2.2 The Main User Interface . . . . . . . . . . . . . . . . . 322
A.3 Developing Simulation Models . . . . . . . . . . . . . . . . . . 325
A.3.1 Define a Scenario . . . . . . . . . . . . . . . . . . . . . 325
A.3.2 Building a Model . . . . . . . . . . . . . . . . . . . . . 327
A.3.3 Executing a Model . . . . . . . . . . . . . . . . . . . . 331
A.3.4 Examining Results . . . . . . . . . . . . . . . . . . . . 331

B Visual Basic Introduction 337


B.1 Trace Output . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
B.2 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
B.3 Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
B.4 Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
B.5 Data Type Conversions . . . . . . . . . . . . . . . . . . . . . . 342
B.6 Conditional Statements . . . . . . . . . . . . . . . . . . . . . . 343
B.7 Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345

C Formula Properties and Methods 347


C.1 Engine and Associated Properties . . . . . . . . . . . . . . . . 347
C.2 Scenario and Associated Properties . . . . . . . . . . . . . . . 347
C.3 Entity and Associated Properties . . . . . . . . . . . . . . . . 348
C.4 Accessing and Manipulating Elements . . . . . . . . . . . . . . 348
C.5 Distribution Sampling . . . . . . . . . . . . . . . . . . . . . . 349
C.6 Mathematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
C.7 Requesting and Releasing Resources . . . . . . . . . . . . . . . 350
C.8 Scheduling Events . . . . . . . . . . . . . . . . . . . . . . . . . 351
C.9 Terminating Simulation . . . . . . . . . . . . . . . . . . . . . . 351
C.10 Writing to the Trace Window . . . . . . . . . . . . . . . . . . 351
Preface

Construction Simulation: An Introduction Using Simphony is an introductory book on process interaction simulation applied to construction engineer-
ing and management. The book covers the principles of discrete event sim-
ulation, continuous simulation, and combined simulation concepts using the
CYCLONE modelling template and the General Purpose Modelling template
of Simphony.NET.
This first version of the book covers the basic concepts required to de-
velop models in CYCLONE and the General Purpose Modelling templates.
Discrete event, continuous, and combined modelling are discussed with illus-
trations from construction processes. Subsequent versions of this book, the
next due to be released in the Summer of 2016, will include new chapters
on developing Simphony templates, Bayesian updating techniques in sim-
ulation, Markov models, and distributed simulation using the High Level
Architecture.
For this first version, Chapter 1 is an introduction to simulation, and explains definitions and different types of simulation models, with a focus on
construction applications. This chapter overviews simulation software, and
gives an introductory example.
Chapter 2 describes statistical techniques applicable to the simulation of
repetitive construction operations and presents a practical example applica-
tion to demonstrate how they can be applied. Procedures for selecting input
models, methods for solving for the parameters of selected distributions, and
goodness-of-fit testing for construction data are reviewed. The discussions on
output analysis were limited to simulation output that is normal because it
is frequently encountered in construction simulation. Methods for checking
the normality of output data, building confidence intervals for various out-
put parameters, and validation of simulation models are also addressed. It

provides the basic background in applied statistics that is required to carry out a Monte Carlo simulation experiment.
Chapter 3 outlines simulation model verication, validation and accredi-
tation. These processes are important to ensure that a model runs properly
and provides accurate output.
Chapter 4 discusses the CYCLONE (CYCLic Operations NEtworks)
modelling language. The chapter outlines how to develop simulation mod-
els using CYCLONE and how to simulate them within Simphony. First,
we cover the graphical modelling elements of CYCLONE and their rules.
Then, we detail developing CYCLONE models, hand simulation, and com-
puter simulation. We conclude with practical applications of construction
processes.
In Chapter 5, we cover another approach for modelling dynamic systems
through the composition of more complex objects that can describe more
details of the underlying system. This approach is referred to as general
purpose modelling. With this approach we can build models for a variety of
applications and varying degrees of complexity.
In Chapter 6, we discuss combined models whereby continuous variables
that change constantly over time are required to be part of a discrete event
model such as the ones we have discussed so far in this book. Since we have
already learned about modelling in discrete events, this chapter will start
with discussions of how we model continuous change in a system.
Chapter 7 discusses statistical aspects of simulation with a major focus on
Monte Carlo simulation. The variability in the real world can be accounted
for in a simulation model using the concept of Monte Carlo simulation. The
Monte Carlo Method is a process that makes use of random numbers and
the principles of statistical sampling to model random processes.
The Simphony.NET User's Guide can be found in Appendix A, and Ap-
pendix B is an introduction to Visual Basic. Appendix C covers Simphony
formula properties and methods.
This book can be used in two ways: those interested in using CYCLONE
can simply read Chapters 1, 2, 4 and 7. Those interested in general purpose
modelling can skip Chapter 4, although learning CYCLONE first can make
it easier to learn the more involved general purpose modelling.
Simphony was first developed in 1998 by AbouRizk and Hajjar. Its succes-
sor Simphony.NET was developed by AbouRizk and Hague and continues to
be enhanced and extended at the University of Alberta through Dr. AbouR-
izk's research program. Simphony is a rich modelling environment that is
composed of simulation services and a modelling user interface. It is based on modular and hierarchical concepts that provide a medium for deploying
simulation modelling templates. The two modelling templates discussed in
this book are general purpose systems by design, and therefore, can be used
to model any process. Other templates include fuzzy cognitive maps, and
special purpose templates for tunnelling, earthmoving and fabrication plants.
Simphony also provides a medium for fuzzy number processing, matrix ma-
nipulation and statistical analyses. It also facilitates model extensions by
enabling the user to write their own programming code using Visual Basic,
C#, and other .NET programs. Simphony templates can be developed and
deployed using Visual Studio and the Simphony platform.
Simphony can be downloaded from http://development.construction.ualberta.ca/simphony40/ at no cost for educational and research use.
Commercial use of Simphony can be negotiated with DRAXWare Inc.
Acknowledgements
Two professors have inspired Dr. AbouRizk's work in simulation since the late 1980s: Dr. Daniel W. Halpin (Professor Emeritus, Purdue University) and Dr. James R. Wilson (Professor, N.C. State University). Their influence can be easily traced in each of the chapters covered in this book. Dr.
Halpin developed the CYCLONE system and introduced Dr. AbouRizk to
building CYCLONE models in 1985, and supervised his work in simulation
from then until 1990. He continued to collaborate with the research team
of the University of Alberta on simulation until he retired. Dr. Wilson's influence was twofold: first, in the statistical aspects of simulation (especially input modelling), and second, by introducing Dr. AbouRizk and Dr.
Halpin to the Winter Simulation Conference. The WSC is a premier venue
for learning about simulation and readers are encouraged to participate in its
annual conference as well as read its proceedings (see www.wintersim.org).
The authors of this book are indebted to both Drs. Halpin and Wilson for
influencing their thinking and for inspiring them to work in this field.
The works on simulation reflected in this book represent contributions
from numerous students and colleagues. In particular, Dr. Dany Hajjar's
original PhD thesis work that led to early versions of Simphony, and Drs.
Yasser Mohamed, Janaka Ruwanpura, and Brenda McCabe, who have made
significant contributions to our understanding of how simulation can be ap-
plied in construction.
The examples in various chapters were synthesized from projects com-
pleted by Dr. AbouRizk's students. Credits were given in each chapter. In
addition, Ms. Jangmi Hong, Ms. Sherry Hu, Mr. Estacio Pereira, and Mr.
SeyedReza RazaviAlavi helped produce material for this book as Teaching
Assistants in the simulation course at the University of Alberta.
Ms. Amy Knezevich edited this book and coordinated its production, a
demanding task for which we are grateful.

Dedication
To our families that continue to support our work without reservations. . .

Dr. AbouRizk: Marleine, Hala, Sean, Jenna, Michael, Deema, and Sophia.

Mr. Hague: Christa, Daniel, and Kendra.

Dr. Ekyalimpa: Priscilla, Nathanael, Ethan, and Elisha.

Chapter 1
Introduction to Simulation

1.1 Construction Engineering: Context


Engineered facilities, infrastructure designed and built by engineers, can be
thought of as complex systems that combine many interrelated physical
components and various professionals who are involved in the process from
inception to completion. Engineered facilities exhibit numerous challenges
since they are rarely repeatable and are generally built in an open, un-
controlled environment. Examples of engineered facilities include industrial
plants, roads and bridges, commercial buildings, mines, recreational areas,
etc.
In general terms, we can divide the life cycle of an engineered facility into
four phases: initiation, planning, execution, and operation, as demonstrated
in Figure 1.1. As construction engineers, we may be involved in all phases
of the facility life cycle, although more prominently in the execution phase,
and specically in the construction phase. Like other engineering disciplines,
we need tools to represent our ideas and to represent or interact with these
complex systems. Commonly available tools include drawings (models of the
envisioned final product), specifications, contracts and narratives for various
aspects of the work, and organizational tools such as CPM networks.
Consider the project shown in Figure 1.2. A new development (referred to
as NewDev) in the west end of Edmonton is being planned (partially shown
as green area west of 199 Street in the truncated map in Figure 1.2). NewDev
requires civil infrastructure to service the area, including roads, sewers (storm
and sanitary), water, electricity, cable and phone, etc., before lots can be created and houses can be developed. The entire scope of the development
project is outside the scope of this example, so our scope of work will be
defined solely as providing sanitary servicing to this new development. We
start our work in the planning phase, where numerous options for providing
sanitary servicing to the NewDev are being developed. Options would include
a trunk sewer connecting the NewDev sanitary system to the main trunk
servicing the area, as shown in Figure 1.2 (the main trunk in this case runs
along 170 Street). This could be a gravity line running along 100 Avenue, or a
forced main line using a collection area and a pump station. The methods of
construction vary from open cut for shallow vertical alignments to trenchless
methods for deeper ones.
The execution phase, which starts after the concept is complete, involves significant engineering input and is generally divided into three distinct sub-
phases: design, construction and commissioning. These phases may overlap.
The engineering sub-phase is generally divided into concept design, prelimi-
nary design, and detailed design. The construction phase will have different
sub-phases, and many interrelated contracts and work packages. The com-
missioning phase serves to ensure that the facility functions as designed.
The operation phase is the phase where the project owner (municipality or
developer) can operate and use the facility.

Figure 1.1: Life Cycle of an Engineered Facility



1.2 Engineers Work with Models


From the onset of engineering as a discipline, we have used models to ana-
lyze, communicate, test and design our facilities. In the past, physical scaled
models were the norm. These were used for different purposes, such as to model hydraulic flow in a laboratory, to show the building we envision constructing, or to represent the mechanical components of machines we want to engineer.
Today, most models are built with computers in a virtual world. We use
2-D drawings, 3-D modelling, numerical modelling and process interaction
modelling for various types of situations that we engineer.
The model shown in Figure 1.2 is simply a 2-D drawing superimposed on
a map (a satellite image of the area). It includes illustrations of the vertical
alignment and the construction stages. It is a conceptual model that shows

Figure 1.2: Sample Project Sanitary Servicing for a New Development (SMA
Consulting, n.d.-c)

the collection point of the sewer system, its end location and its potential
alignments.
In subsequent detailing of the design, the drainage engineer will identify
the network of pipes that will service the area based on the hydraulic models,
and by identifying the manner in which this sewer will be connected into the
main network within the City. The results are described in the form of draw-
ings, as demonstrated in Figures 1.3 and 1.4, and in the form of documents
describing general contract requirements (common to all similar projects),
special contract requirements (unique to this project) and specifications to
be followed by the contractor when they build the sewer line.
The drainage engineer uses hydraulic models (as shown in Figure 1.5) to model the flow of storm water in the envisioned development during the concept design. This is a numerical model, as it simply computes flow through the various structures using mathematical equations. The drainage engineer will project that flow, based on rainfall history in the area, the development itself, the number of houses in the area, etc. The hydraulic model represents the network of pipe and its capacity, the behaviour of the storm water based on historical records, and the built area where the storms collect, and then numerically simulates the flow of storm water on a computer. This will allow
the simulationist (this word is used throughout the book to describe a person
that is developing the simulation model, and as such, may refer to engineers,
managers, analysts and simulation team members) to select the right size
and grade of the new sewer line by inputting the required parameters into
the model. The engineer will also describe the assumptions made, the re-
quirements envisioned, and various issues that need to be addressed, such as
land drainage.
Similar to hydraulic engineers, transportation engineers develop and deploy traffic modelling to design roads, traffic intersections, etc. For example, the storm sewer we intend to build requires that a major shaft be constructed at a major intersection. Transportation engineers will represent the roads, the traffic signals leading to that intersection, and the pattern of traffic in a traffic model, as demonstrated in Figure 1.6. They will then be able to subject this model to changes due to lane closure during construction and be able to answer questions related to traffic build-up in the area, for example. These models are process interaction models, involving combinations of event driven simulation, mathematical formulations, and process interaction
simulations.

Figure 1.3: Sample Drawing Showing the Design of a Tunnelling Project (City of Edmonton, 2011)

Figure 1.4: Drawing Showing Details of the Selected Tunnel Project (City of
Edmonton, n.d.)

Figure 1.5: Sample Hydraulic Model (SMA Consulting, n.d.-b)

Figure 1.6: Traffic Simulation (Qiu, 2015)

Like other engineers, construction engineers can utilize models to help


them describe and build a system. The 3-D model shown in Figure 1.7
demonstrates the engineer's view of the construction of one portion of the sewer line given in Figure 1.2 using an open cut method.
Likewise, the simulation model shown in Figure 1.8 is a representation of
the tunnelling process that the construction engineer intends to use in con-
structing the sewer discussed above. Through this model, the simulationist
can determine the progress rate or advancement rate of the tunnelling pro-
cess that is expected throughout the tunnel (e.g., in m/shift); he/she can also
determine the required resources, and when and how they will be deployed
in order to achieve a certain desired performance.

1.3 Responsibilities of Construction Engineers


In construction, we deal with complex facilities that are generally composed
of multiple components, and involve multiple stakeholders, many supply
chains, and a multitude of engineers and professionals. The responsibili-
ties of a construction engineer are numerous and beyond the context of this
textbook (readers should see Halpin and Woodhead (1980) among other read-
ings for further explanation). However, it is important to emphasize specic
responsibilities that are within the context. As construction engineers, we
need to deliver:
A good-quality (optimized) product.

• The product that is being built should match the design and its intent
(which represents the idea envisioned by the owner).

Figure 1.7: Open Cut Portion



Figure 1.8: Construction Model for Tunnel Production Process

• The design and the execution plan should be free from errors (and any
errors should be identified as early in the design process as possible,
since mistakes tend to be more costly to correct later on in the project's
life cycle).

In a well-executed manner. The set of processes and resources we use to translate an idea into a product is called a project.

• The project required to bring a product to reality should be well planned (associated with an accurate estimate of cost, time and quality).

• The construction methods chosen for the project should be feasible and efficient.

• The project components should be properly integrated through accurate interface management.

• The most effective choices (construction methods, equipment, etc.) for building the various components should be adopted.

To deliver a good quality product in a well-executed manner, we need to be able to develop models that can be used to effectively analyze our approaches and communicate our thoughts to others. In particular, we need models that would help us describe:

• the product components to be built,

• the physical aspects of the product,

• the process that we envision for building each component,

• the methods and resources required, when they are involved and how they combine to complete the work,

• the relative order of the components, and

• the boundaries between components and the external environment.

With advancements in computer software, the above requirements can be achieved by describing the world in the form of software objects and their
interactions. This is often achieved through simulation modelling languages
like Simphony (AbouRizk & Hajjar, 1998).

1.4 Simulation Definitions


In this book, we are mainly concerned with modelling construction systems
in such a manner that will enable us to study their behaviour, assess how
that behaviour responds to various changes (by experimenting with models),
and manipulate the model so as to achieve optimum performance in the
construction system.
Our desire is to achieve this with no or minimal interference with the real
construction system and in the most cost-eective manner possible.
The Encyclopedia Britannica (2014a) defines computer simulation in the
way that most represents our thinking in this book:

the use of a computer to represent the dynamic responses of one system by the behaviour of another system modelled after
it. A simulation uses a mathematical description, or model, of
a real system in the form of a computer program. This model is
composed of equations that duplicate the functional relationships
within the real system. When the program is run, the resulting
mathematical dynamics form an analog of the behaviour of the
real system, with the results presented in the form of data.

We use computer simulation to model a system. The Encyclopedia Britannica (2014b) has the following definition of a system: "a system is a portion of the universe that has been chosen for studying the changes that take place within it in response to varying conditions."
A useful definition of a model that serves our purpose is as follows (By
Wikipedians (Eds.), n.d.):

a model is anything used in any way to represent anything else. Some models are physical objects, for instance, a toy model which
Some models are physical objects, for instance, a toy model which
may be assembled, and may even be made to work like the object
it represents. In contrast, a conceptual model is a model made
of the composition of concepts that thus exists only in the mind.
Conceptual models are used to help us know, understand, or sim-
ulate the subject matter they represent.

Simulation in the context of this book can be defined as:

the use of computer software (e.g., Simphony) to represent the dynamic responses of a construction system by the behaviour of
a model made to represent it. A simulation uses mathematical
descriptions, graphical constructs, computer algorithms (as well
as other means) that are generally encapsulated in a simulation
software model to represent the real system.

The construction system is defined as:

any portion of the construction world (i.e., facility, environment, project, resources, etc.) that has been chosen for studying the
changes that take place within it in response to varying stimuli,
for documenting its dynamic behaviour, or optimizing its perfor-
mance.

A simulation model is defined as:

a composition of objects (often associated with graphical notations) that represent an abstraction of the construction system.
The abstraction is generally in the form of concepts that describe
the elements of the system and its behaviour that are relevant to
the model as determined by the simulationist. The collection of
objects is used to help us describe the system, study and under-
stand it and simulate its behaviour.

1.5 Types of Simulation


The definitions above cover many types of simulations that are encountered
in construction. In this section, we discuss some of the more prevalent types
and emphasize those that we will be covering in this book.

1.5.1 Dynamic Simulation Models


Dynamic simulation models are concerned with modelling a system that
changes over time. A model describing the production process of a tun-
nel is an example of a dynamic simulation system. The model would likely
describe how the tunnel boring machine, the construction crew, the train
system and the crane all operate in an interactive manner to produce the
tunnel one meter at a time. As time changes, the model will change, hence
the term dynamic model.

1.5.2 Discrete Event Simulation Models


Discrete event simulation models are a particular type of dynamic simulation
models. These models are processed (computer simulated) by advancing the
time in discrete segments based on important events that take place in the
model. The simulation model generally starts with a given event, which
triggers other events, until a termination point is met. An example of such
a model is the CYCLONE model (see Chapter 4) shown in Figure 1.9.
The model is discrete because when it is processed, it simulates the world
by virtue of tracking various events and their chronological processing, as
demonstrated in Table 1.1.

Figure 1.9: CYCLONE Model

Table 1.1: Chronological Processing of a CYCLONE Model

Activity                  Completion Time (Min)   Production Rate
Loaded (Truck No. 1)       5
Loaded (Truck No. 2)      10
Arrived (Truck No. 1)     20
Dumped (Truck No. 1)      23                      1/23 = 0.0435
Arrived (Truck No. 2)     25
Dumped (Truck No. 2)      28                      2/28 = 0.0714
Returned (Truck No. 1)    33
Loaded (Truck No. 1)      38
Returned (Truck No. 2)    38
Loaded (Truck No. 2)      43
Arrived (Truck No. 1)     53
Dumped (Truck No. 1)      56                      3/56 = 0.0536
Arrived (Truck No. 2)     58
Dumped (Truck No. 2)      61                      4/61 = 0.0656
Returned (Truck No. 1)    66
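
To make the chronological processing above concrete, the short Visual Basic sketch below replays the same two-truck cycle by hand. The activity durations (5 minutes loading, 15 minutes hauling, 3 minutes dumping, 10 minutes returning) and the single shared loader are assumptions inferred from the completion times in Table 1.1; CYCLONE and Simphony perform this event bookkeeping automatically, so the sketch is only an illustration of the underlying logic.

Imports System
Imports System.Collections.Generic
Imports System.Linq

Module HandSimulation
    Sub Main()
        ' Assumed activity durations, in minutes (inferred from Table 1.1).
        Const LoadTime As Double = 5.0
        Const HaulTime As Double = 15.0
        Const DumpTime As Double = 3.0
        Const ReturnTime As Double = 10.0

        Dim eventLog As New List(Of Tuple(Of Double, String))()
        Dim loaderFreeAt As Double = 0.0            ' one loader shared by both trucks
        Dim backAtLoader() As Double = {0.0, 0.0}   ' time each truck is ready to load

        For cycle As Integer = 1 To 2               ' two cycles, as in Table 1.1
            For truck As Integer = 0 To 1
                ' Loading starts when the truck is back and the loader is free.
                Dim loadEnd As Double = Math.Max(backAtLoader(truck), loaderFreeAt) + LoadTime
                loaderFreeAt = loadEnd
                Dim arriveDump As Double = loadEnd + HaulTime
                Dim dumpEnd As Double = arriveDump + DumpTime
                Dim returned As Double = dumpEnd + ReturnTime
                backAtLoader(truck) = returned

                eventLog.Add(Tuple.Create(loadEnd, String.Format("Loaded (Truck No. {0})", truck + 1)))
                eventLog.Add(Tuple.Create(arriveDump, String.Format("Arrived (Truck No. {0})", truck + 1)))
                eventLog.Add(Tuple.Create(dumpEnd, String.Format("Dumped (Truck No. {0})", truck + 1)))
                eventLog.Add(Tuple.Create(returned, String.Format("Returned (Truck No. {0})", truck + 1)))
            Next
        Next

        ' Listing the events in chronological order reproduces the completion times in Table 1.1.
        For Each ev In eventLog.OrderBy(Function(e) e.Item1)
            Console.WriteLine("{0,-25} {1,6:0}", ev.Item2, ev.Item1)
        Next
    End Sub
End Module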

1.5.3 Continuous Change Models


Continuous change models are a type of dynamic model that are processed
by incrementing time in uniform (equal) steps. The model is evaluated at
each of those steps, changes are implemented to affected elements of the
model, and observations are collected until the simulation terminates. The
hydraulic model we showed in Figure 1.5 is an example. Likewise, the model
in Figure 1.10 is a continuous model. The processing of the model is achieved
by solving systems of equations over time. The results of such a model
are shown in Figure 1.11, where the storage level of the tank over time is
estimated based on various changes in the model as time progresses.
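
A minimal sketch of this fixed-step idea is shown below, using a hypothetical tank whose outflow is proportional to its current contents. The drain coefficient, step size and run length are made up for the illustration, and simple Euler stepping is used rather than the Runge-Kutta-Fehlberg integration that Simphony applies (see Chapter 6).

Imports System

Module FixedStepTank
    Sub Main()
        ' Hypothetical continuous model: dV/dt = -k * V (outflow proportional to contents).
        Dim level As Double = 1000.0   ' current contents of the tank
        Dim k As Double = 0.05         ' assumed drain coefficient per unit time
        Dim dt As Double = 0.5         ' uniform (equal) time step
        Dim t As Double = 0.0
        Dim stepCount As Integer = 0

        While t < 100.0
            ' Evaluate the rate at the current state, then advance one step (Euler method).
            Dim outflow As Double = k * level
            level = Math.Max(0.0, level - outflow * dt)
            t += dt
            stepCount += 1

            ' Collect an observation every 20 steps (every 10 time units).
            If stepCount Mod 20 = 0 Then
                Console.WriteLine("t = {0,6:0.0}   level = {1,9:0.00}", t, level)
            End If
        End While
    End Sub
End Module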

1.5.4 Static Simulation Models


Static simulation models do not change over time. In general terms, static
simulation models are formulations of a component of the system where the
model remains the same regardless of the passage of time. Mostly, these
models are useful in construction when Monte Carlo simulation is applied.
For example, a Monte Carlo simulation of a project estimate is a summation
of all the costs on a project, and the risk events that could take place as the
project is executed (with their probabilities and impacts). A Monte Carlo
simulation involves making multiple simulation runs whereby in each run
we randomly sample the individual estimates of the work packages and the
occurrence of risks, and tally the total. We repeat the simulation multiple
times until we have a sample of project cost that we can then use to represent
the uncertainty associated with the project's total cost estimate. A classic
example of a static simulation model is the range estimate carried out for a
project. The cost estimates of the work packages are often represented as
statistical distributions, as shown in Table 1.2. A simulation then entails
repeated runs whereby in each run, we generate random numbers from each
of the distributions, figure out the total cost for the project on that run, and
record it as one observation. We repeat this for many iterations to develop
the cost of the project, as shown in Figure 1.12.
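
A minimal sketch of such a range estimate is shown below. Purely for illustration, it assumes each work package cost in Table 1.2 is uniformly distributed between its minimum and maximum; in a real estimate the distributions (and any risk events) would be selected as discussed in Chapters 2 and 7.

Imports System
Imports System.Linq

Module RangeEstimate
    Sub Main()
        ' Minimum and maximum costs for Work Packages A, B and C (Table 1.2).
        Dim minCost() As Double = {10000, 25000, 20000}
        Dim maxCost() As Double = {25000, 50000, 30000}
        Dim rng As New Random(42)            ' fixed seed so the run is repeatable
        Const Iterations As Integer = 10000
        Dim totals(Iterations - 1) As Double

        For run As Integer = 0 To Iterations - 1
            Dim total As Double = 0.0
            For wp As Integer = 0 To 2
                ' Sample each work package cost (assumed uniform between min and max).
                total += minCost(wp) + rng.NextDouble() * (maxCost(wp) - minCost(wp))
            Next
            totals(run) = total              ' one observation of total project cost
        Next

        Array.Sort(totals)
        Console.WriteLine("Mean total cost: {0:C0}", totals.Average())
        Console.WriteLine("80th percentile: {0:C0}", totals(CInt(0.8 * Iterations) - 1))
    End Sub
End Module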

1.5.5 Deterministic Simulation Models


Deterministic simulation models are composed of elements that are all con-
stant and do not change during the simulation. These models are generally

Figure 1.10: Continuous Model in Simphony

[Plot: storage tank level (vertical axis) versus simulation time in days (horizontal axis, 0 to 350).]

Figure 1.11: Results of Continuous Model



Table 1.2: Sample Distributions in a Range Estimate

Work Package      Min       Max       Impact Distribution
Work Package A    $10,000   $25,000   relative frequency histogram, $10,000 to $25,000
Work Package B    $25,000   $50,000   relative frequency histogram, $25,000 to $50,000
Work Package C    $20,000   $30,000   relative frequency histogram, $20,000 to $30,000

[Histogram: relative frequency of simulated total project cost, spanning roughly $65,000 to $95,000.]

Figure 1.12: Results of a Monte Carlo Simulation



not useful for decision making, but can be invaluable for model verification
and debugging.

1.5.6 Stochastic/Monte Carlo Simulation Models


Stochastic simulation models refer to models that incorporate random pro-
cesses during model execution. Monte Carlo based simulation models are
one form of stochastic models in which the simulation objects include prob-
abilistic distributions to model their random nature. Those distributions are
randomly sampled during simulation using Monte Carlo methods as discussed
in Chapter 7.

1.5.7 Other Types of Simulation


Other types of simulation are encountered in construction, but those are
either derivatives of the above general simulation types or beyond the scope
of this book. Some examples of these types of simulation are outlined below:

1.5.8 4-D Modelling and Animations


4-D modelling and animations are models of the real world that are repre-
sented in 3-D graphics, made to show the progression of the construction
project by adding the project schedule (activities, events and time) to the 3-
D model. Such models are very useful for identifying potential interferences
during construction, aiding understanding of the process envisioned for the
project and tracking progress.
The model shown in Figure 1.13 demonstrates how 4-D models can be
used for project tracking and control. In the figure, Autodesk Navisworks is
used to show the 3-D model of the project and colours of various sections
show progress achieved to date. Red is late, green is on track, etc. The same
can be animated over time to see how the project was to be built.

1.5.9 Agent-Based Modelling


Agent-based modelling, also known as ABMS, or agent-based modelling and
simulation, is an approach to modelling systems that are made up of individ-
ual, autonomous agents that interact (Macal & North, 2009, 2011). ABMS

Figure 1.13: Project Tracking and Control with 4-D Model (SMA Consulting,
n.d.-a)

offers methods to model individual behaviours and how different behaviours affect others (Macal & North, 2011).
Agent-based models typically have three elements: 1) agents, with their
attributes and behaviours, 2) agent relationships (how they interact with one another), and 3) the environment in which the agents exist (Macal &
North, 2011).
These models are used to model agent behaviour in systems such as supply chains and consumer markets (Macal & North, 2009).

1.5.10 System Dynamics


The key components of system dynamics modelling are causal loop diagrams,
balancing and reinforcing loops and behaviour over time.
The basic element of the system structure is the feedback loop. In system
dynamics, feedback loops interact with one another, and this creates complex behaviour. Reinforcing (positive) feedback loops cause growth, and balancing (negative) loops slow growth.
Characteristics of system dynamics models are that they show cause and
effect relationships, time-delayed action, and non-linear responses.
They are used to analyze industrial, economic, social and environmental
systems of all kinds.

1.6 Modelling Systems


There are many ways to model a system, but one of the most useful ways to
model any system is to first think about abstracting the important elements,
and then to represent those elements in some fashion using a computer.
Abstracting the system means you have to understand what the problem is,
think of it as a system, draw the boundaries, and then try to break it into
pieces, then take those pieces and put them, in some fashion, into a computer,
using a specic language and system. Some examples of computer systems
used to model are spreadsheets, computer programs, a Simphony General
Purpose model, a Simphony Special Purpose model, an AnyLogic model, a
SLAM system model, etc.
You will typically abstract the elements and then model the system. Mod-
elling any operation that is to be simulated is an art that requires creativity
and a thorough understanding of the operation (tasks involved, the sequence
of these tasks, resources required, i.e., crews, equipment, materials; and the
interaction of all these aspects). It is unlikely that two different people will produce identical models; although the problem and even the solution may be the same, the model is usually unique to the simulationist.

1.6.1 Modelling Dynamic Systems


The simplest form of dynamic systems can be found in everyday queuing ex-
amples. Queuing systems (a branch of operations research) can be visualized
using the graphical arrangement illustrated in Figure 1.14. To create a queu-
ing model of a dynamic system, we need to abstract the system in the form of
the modelling strategy that is shown in Figure 1.14. In other words we have
to simplify the real world problem in such a way that it can be represented
using a model similar to that of Figure 1.14. We try to understand the real
world system, observe it, collect information about it, then we represent it
(model it).
Illustrated in Figure 1.14 is the simplest form of a queuing system: an
open queue that has customers arriving, being served by servers, and depart-
ing. The box represents the boundaries of the system.
For a period of time T, we measure:

• the number of arrivals A,

• the amount of time the server(s) are busy B, and

• the number of completions C.

From these we can determine the following:

• the arrival rate, λ = A/T,

• the service rate, µ = C/T,

• the server utilization, U = B/T, and

• the average service time per customer, S = B/C.

Note that if the queuing system is in a steady state (i.e., the length of the queue is not varying much), then we must have C ≈ A and λ ≈ µ. From this it follows that U = µS ≈ λS. Finally, if L is the average number of customers in the system and W is the average time each customer spends in the system, then Little's law says that L = λW ≈ µW.
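
As a small illustration of these relationships, the sketch below computes the rates from a set of made-up observations and then applies Little's law with an assumed average time in the system; all of the numbers are hypothetical and chosen only to show the arithmetic.

Imports System

Module QueueMetrics
    Sub Main()
        ' Hypothetical observations collected over a period T (all values made up).
        Dim T As Double = 60.0    ' observation period (min)
        Dim A As Double = 18.0    ' number of arrivals
        Dim B As Double = 54.0    ' time the server was busy (min)
        Dim C As Double = 17.0    ' number of completions

        Dim lambda As Double = A / T    ' arrival rate
        Dim mu As Double = C / T        ' service rate
        Dim U As Double = B / T         ' server utilization
        Dim S As Double = B / C         ' average service time per customer

        Console.WriteLine("Arrival rate      = {0:0.000} per min", lambda)
        Console.WriteLine("Service rate      = {0:0.000} per min", mu)
        Console.WriteLine("Utilization       = {0:0%}", U)
        Console.WriteLine("Avg. service time = {0:0.00} min", S)

        ' Little's law: with an assumed average time in the system W of 10 minutes,
        ' the average number of customers in the system is L = lambda * W.
        Dim W As Double = 10.0
        Console.WriteLine("Little's law: L = {0:0.0} customers in the system", lambda * W)
    End Sub
End Module
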
As systems become more complicated, the analysis becomes tedious to carry out with the method shown above. However, we can simulate the problems instead. Let's look at a simple example of a queuing system and solve it both analytically (as discussed above) and using simulation.

1.6.2 A Simple Truck-Shovel Problem


You are hired to assist a contractor in analyzing an earth-moving operation.
The operation uses trucks to haul dirt that is excavated by a backhoe and
loaded into the trucks. There could be multiple trucks and backhoes oper-
ating at the same time. The contractor wants to determine why trucks are
waiting and what effect the delay has on productivity of the operation and

Figure 1.14: Model of Queuing System

the resulting costs. Simulation can be used in this circumstance, allowing us to create a model and experiment with it to answer different questions.
Suppose that when you went to look at this operation, you observed 18
trucks arrive and 15 trucks depart over the course of 1 hour and that the
backhoe was continuously busy throughout this time period. In this case:

• the arrival rate, λ = A/T = 18 trucks / 60 min = 0.30 trucks/min,

• the service rate, µ = C/T = 15 trucks / 60 min = 0.25 trucks/min,

• the server utilization, U = B/T = 60 min / 60 min = 100%, and

• the average service time per truck, S = B/C = 60 min / 15 trucks = 4 min/truck.

A discrete event simulation model constructed using the Simphony General Template for this problem is shown in Figure 1.15.
The Simphony model outputs data that the simulationist can use to aid
decision making, as shown in Figure 1.16. For a particular element, the

Figure 1.15: Simphony Model of Simple Truck-Shovel Problem

system outputs a specific type of information: for example, the length of the queue, the average waiting time, etc. The model can also be used for experimentation; for example, you could change the model from one server to two identical servers, and see how that changes the model output.
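
For readers curious about what happens behind such a model, the sketch below hand-codes a single-server queue using the figures from this example. The exponential interarrival times (mean 1/0.30 ≈ 3.3 min, generated with the inverse transform method covered in Chapter 7), the constant 4-minute loading time and the 8-hour shift are assumptions made for illustration only; this is not the Simphony model itself. Because the arrival rate exceeds the service rate, the queue builds up over the shift, which is consistent with the trucks observed waiting.

Imports System
Imports System.Collections.Generic

Module TruckQueueSimulation
    Sub Main()
        Dim rng As New Random(7)
        Const ArrivalRate As Double = 0.30   ' trucks per minute (observed)
        Const ServiceTime As Double = 4.0    ' minutes per truck (observed)
        Const ShiftLength As Double = 480.0  ' one 8-hour shift, in minutes

        ' Generate the arrival times for the shift in advance.
        Dim arrivals As New Queue(Of Double)()
        Dim t As Double = Sample(rng, ArrivalRate)
        While t < ShiftLength
            arrivals.Enqueue(t)
            t += Sample(rng, ArrivalRate)
        End While

        ' Serve the trucks in order of arrival with a single backhoe.
        Dim serverFreeAt As Double = 0.0
        Dim totalWait As Double = 0.0
        Dim served As Integer = 0
        While arrivals.Count > 0
            Dim arrive As Double = arrivals.Dequeue()
            Dim start As Double = Math.Max(arrive, serverFreeAt)  ' wait if backhoe busy
            totalWait += start - arrive
            serverFreeAt = start + ServiceTime
            served += 1
        End While

        Console.WriteLine("Trucks served:         {0}", served)
        Console.WriteLine("Average wait in queue: {0:0.0} min", totalWait / served)
    End Sub

    ' Exponential interarrival time with the given rate (inverse transform method).
    Function Sample(rng As Random, rate As Double) As Double
        Return -Math.Log(1.0 - rng.NextDouble()) / rate
    End Function
End Module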

1.7 Simulation Software


Simulation software can be broadly categorized into monolithic and distributed, depending on the architecture of the simulation model.
Monolithic simulation is one in which all model components are localized into one, such that they all execute and terminate at the same time, and are normally run on one computer. Most Simphony models fall under this category. This will be the main focus in this book.
In distributed simulation, modelling components are separate and autonomous, but interact with each other during execution. At execution, the different components run in parallel but can start and terminate at different times. Each component can be run on a separate computer in different locations. The design of these frameworks is based on the High Level Architecture (HLA) (IEEE, 2000). COSYE (COnstruction SYnthetic Environment) is an
example of such a framework (AbouRizk & Hague, 2009).
We developed a simulation environment called COSYE at the University
of Alberta (AbouRizk & Hague, 2009). COSYE uses the High Level Architec-
ture (IEEE, 2000) as its basis, which is a distributed simulation specification
that allows the simulationist to run the same simulation on multiple parallel
machines. This is useful for large, complex systems. Sometimes a simula-
tion could take days, or weeks, or months to run, depending on the system
you're simulating. When simulation time becomes an issue, parallel and dis-
tributed simulation becomes very useful because you can break the problem
down and let different machines solve different parts of the problem, but still

Figure 1.16: Statistics Report for Simphony Model of Simple Truck-Shovel Problem

communicate with one another. This will be covered in detail later in the
book.
A number of different commercial and academic simulation software sys-
tems exist today. Some examples of simulation software developed in aca-
demic institutions include:

• Micro CYCLONE (Halpin, 1973),

• STROBOSCOPE (Martinez, 1996),

• Simphony (AbouRizk & Hajjar, 1998),

• ABC (Approximate Bayesian Computation) (Beaumont, Zhang, & Balding, 2002), and

• SLAM (a hybrid developed at Purdue University, which then became a fairly well-known commercial system) (Pritsker, O'Reilly, & Laval, 1997).

Some examples of commercial simulation software are:

• ARENA (Rockwell Automation, 2000),

• AnyLogic (The AnyLogic Company, 2000), and

• SIMSCRIPT (CACI Advanced Simulation Lab, 2014).

New software is continually being developed.


The principle of these simulation software systems is basically the same.
These software systems all have provisions for a general purpose template
for modelling processes. Others, like Simphony, provide for the development
and use of special purpose templates. Simphony was developed to allow sim-
ulation tools to be built on the fly. The Simphony General Purpose tool
is similar to the other simulation software, but Simphony also allows devel-
opment of systems, called templates, which use icons that closely represent
elements from real-world problems to build simulation models. This makes
it easier and quicker to build the models.

• A template is defined as a collection of abstract modelling elements.


• A Special Purpose Template (SPT) is a collection of modelling elements that are designed to have a behaviour that is customized to a specific process. These elements usually have icons that resemble the real world system they represent.

• A General Purpose Template (GPT) is a collection of high-level elements that don't necessarily resemble the system in the real world.

1.8 Developing Simulation Models


Development of simulation models needs to follow a systematic process to be
fruitful. A high-level approach that can be followed is given in Figure 1.17.

Figure 1.17: A Schematic Layout of a Typical Simulation Model Development Process

The development process starts with the identification and abstraction of the problem and its overall process from the real world. The problem domain may comprise an entire real world system or a part of the system, e.g., an operation. This entire process is commonly referred to as formalism of the problem domain or conceptual modelling. It culminates in the production of simulation model assumptions, requirements and specifications. Each of these needs to be documented descriptively and/or graphically, resulting in the creation of conceptual models. Formal languages exist for the representation of simulation model specifications and requirements. Each of these languages has unique rules that guide their use. A popular example commonly used is the Unified Modelling Language (UML). UML provides a
set of constructs that can be used for creating the necessary documentation
during the formalism process. Examples of these include:

• class diagrams,

• state charts,

• sequence diagrams, and

• activity diagrams.

There are other ways of representing information in conceptual modelling. A common example is the use of flow charts, which are widely used in construction engineering and management. Once the conceptual models are finalized, the next step in the development process involves translating these concept models into working simulation models (see arrow B in Figure 1.17). This step requires the simulationist to select an appropriate simulation modelling paradigm, a simulation environment, knowledge of simulation principles (e.g., discrete event simulation) and know-how of the simulation environment chosen for deployment of the models. In this book we describe two different environments: CYCLONE in Chapter 4 and the Simphony General Purpose Modelling template in Chapter 5.
The simulation model developed at this stage is a "draft" model as it requires thorough verification and validation prior to its use.
Generally, users of the model build confidence in the model gradually
through a process referred to as simulation model accreditation. This process
can be expedited by involving the intended model users in the development
process, its verication and validation.
Simulation model accreditation, verification and validation can also go hand in hand with documentation of the details of the model, i.e., arrow C in Figure 1.17. Documentation of a simulation model should include a description of various aspects of the model. These include 1) model inputs, 2) model implementation and logic, and 3) model verification and validation.
The subject of simulation model documentation is covered by Gass (1984).

1.9 Applications of Simulation in Construction


Simulation is a powerful decision-support technique for construction manage-
ment. Accurate models of construction processes can aid in the development
of better alternatives and optimization (AbouRizk & Mohamed, 2000). Dif-
ferent simulation tools can be applied to construction projects for 1) project
planning, 2) identifying bottlenecks in operations, 3) examining productivity
improvements and optimizing resource utilization, and 4) quick comparison of
alternative construction scenarios (Ruwanpura, AbouRizk, Er, & Fernando,
2001). We can use simulation in a variety of applications. During design:

• risk analysis,

• value analysis,

• constructability reviews (scenario-based planning),

• construction plan development,

• budget development, and

• estimating.

During/post construction:

• planning and control,

• continuous improvement, and

• claims and dispute resolution.


Chapter 2
Review of Statistics
The real world is not static or deterministic. Many events are unpredictable,
and many processes appear to occur in a random manner. For example,
the cycle times of the trucks or shovels in an earthmoving operation vary
from cycle to cycle. They are rarely the same. Likewise, the service time
for loading a truck varies from load to load. In the real world there are
many variables that dictate the outcomes of such operations, which make
them appear to be random. For example, the truck cycle time may vary
because of the operator, the road conditions, other trac, change in weather,
unexpected mechanical problems, and so forth. While it would be great for the simulation model to include as many of the factors that impact the cycle time, and as much detail of the process, as possible in order to provide more accurate estimates of each service time or random event, it is generally ill-advised to try to capture all these variables and include them in the model. First, it may not be possible to collect the required input for such variables to feed into the model, and second, the model would be very large, expensive to build, and difficult to manage.
The variability in the real world can be accounted for in a simulation
model, however. We use the concept of Monte Carlo simulation to achieve
this. The Monte Carlo method is a process that makes use of random num-
bers and the principles of statistical sampling to model random processes.
The Monte Carlo method is discussed in more detail in Chapter 7, for the
interested reader.
In this chapter, we simply introduce the Monte Carlo concepts required
to enable us to build credible simulation models. Those models are gener-
ally stochastic in nature. The proper analysis of such models requires: (1) application of input modelling techniques, (2) simulating the model multiple
times and sampling the required output variables so that we can statistically analyze them, and (3) verification and validation of the results. This chapter describes techniques applicable to the simulation of repetitive construction operations and presents a practical example application to demonstrate how they can be applied. Procedures for selecting input models, methods for solving for the parameters of selected distributions, and goodness-of-fit testing for construction data are reviewed. The discussion of output analysis is limited to simulation output that is normally distributed, because this case is frequently encountered in construction simulation. Methods for checking the normality of output data, building confidence intervals for various output parameters, and validation of simulation models are also addressed.
In summary, this chapter is concerned with reviewing some of the sta-
tistical techniques that can be used in a simulation experiment, particularly
as they apply to analyzing construction processes. It provides the basic
background in applied statistics that is required to carry out a Monte Carlo
simulation experiment. More details of the subject are covered in Chapter 7.

2.1 Input Modelling


In analyzing any real-world problem, the analyst is often confronted with the problem of having to collect and model data. Banks et al. (2000) argued that even if the model structure is valid, if the input data are incorrectly collected, inappropriately analyzed, or not representative of the environment, the simulation output data will be misleading and possibly damaging. The effect of using different distribution models on the simulation output for three construction-operation models is briefly addressed by AbouRizk et al. (1989). They conclude that mean values of system-related parameters that are typical of the throughput of a system are insensitive to the type of model used, provided that the statistical input models used have the same mean value. The mean value of resource-related parameters, on the other hand, was sensitive to the input distributions used and showed a significant variation depending on the properties of the input model used (AbouRizk et al. 1989).
Duration input (the most critical form of input in construction models) to a simulation experiment in construction is classically approached by fitting a statistical distribution to a collected sample of observations. A simulationist
can fit any of the classical statistical distributions to the sample of observations. In any case, a check for goodness of fit should be performed. This is often done in the form of statistical goodness-of-fit tests like the chi-square test, the Kolmogorov-Smirnov (K-S) test, Q-Q plots, and visual inspection of the quality of the fit of the empirical cumulative distribution function (CDF) and the fitted (theoretical) CDF. One can also consider visual inspection of the theoretical probability density function (PDF) and the histogram of the sample data. The steps normally followed in input modelling are shown in the flowchart in Figure 2.1.
Fitting a statistical distribution to sample data is often tedious when done manually. The procedure is made easier when one uses computer software in the fitting process. A number of commercial packages are available, for example @RISK (Palisade, 2015). Simphony provides basic services for fitting standard statistical distributions.

2.1.1 Identifying an Appropriate Distribution


Techniques commonly employed in fitting distributions to sample observations that have been found useful in construction applications are discussed in this section.
Given a sample of n observed data points Xi, the procedure of fitting a distribution can be summarized in three basic steps: (1) identifying an appropriate distribution to use as an input model, (2) solving for the selected distribution's parameters, and (3) testing for the goodness of the fit.
A number of techniques can be used to select a distribution to model a sample of data. Plotting the frequency histogram often gives an idea of the shape of the underlying distribution. The problem with this approach, however, is that a number of histograms can be generated from the same set of data. In other words, one can distort the histogram in different ways, e.g., by manipulating the class intervals to give it different shapes. A consistent way of specifying histograms eliminates some of this bias. One approach is to use Sturges' rule (Sturges, 1926). The rule works as follows. Given n observations x1, ..., xn to be summarized in a histogram, one takes:

$$k = \text{number of cells} = \lceil 1 + 3.3\,\log_{10}(n) \rceil.$$



[Flowchart: Start → Collect Data and Construct Histogram → Select a Distribution → Calculate the Parameters of the Selected Distribution → Check for Goodness of Fit → if acceptable, Stop; if not acceptable, return to Select a Distribution]
Figure 2.1: Input Modelling Steps for a Simulation Experiment



The rest of the histogram parameters are then given by:

$$w = \text{width of a cell} = \frac{\max\{x_i\} - \min\{x_i\}}{\text{number of cells}}, \qquad \text{low value of first cell} = \min\{x_i\}.$$

Now in order to compare the resulting histogram with a plotted probability distribution, it is necessary to scale the histogram in such a way that its total area is equal to 1 (just as the area under the probability density function is equal to 1). For example, if $m_j$ denotes the number of observations in cell $j$, then the scaled value of the cell is:

$$m'_j = \frac{m_j}{wn}.$$

Using this scaling, the area of the scaled histogram will be:

$$\sum_{j=1}^{k} w\,m'_j = w \sum_{j=1}^{k} \frac{m_j}{wn} = \frac{1}{n}\sum_{j=1}^{k} m_j = \frac{n}{n} = 1,$$

as desired.
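To make the scaling concrete, the short Python sketch below (an illustration only, not Simphony code; the function and variable names are assumptions) builds a histogram using Sturges' rule and rescales the cell counts so the total area equals 1.

    import math

    def scaled_histogram(data):
        """Build a Sturges'-rule histogram whose total area equals 1."""
        n = len(data)
        k = math.ceil(1 + 3.3 * math.log10(n))     # number of cells
        lo, hi = min(data), max(data)
        w = (hi - lo) / k                          # width of a cell
        counts = [0] * k
        for x in data:
            j = min(int((x - lo) / w), k - 1)      # place x in its cell
            counts[j] += 1
        scaled = [m / (w * n) for m in counts]     # m'_j = m_j / (w n)
        return lo, w, scaled

    # The scaled heights multiplied by the cell width sum to 1.
    lo, w, scaled = scaled_histogram([3.2, 4.1, 5.0, 5.5, 6.3, 7.8, 8.1, 9.4])
    print(sum(h * w for h in scaled))              # -> 1.0 (up to rounding)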
This guideline will usually reveal the general layout of the data. A good practice for selecting distributions is to identify a family of distributions for use as an input model. Guidelines for selecting such a family are presented by Wilson (1989) and can be summarized as follows:

1. The family should be flexible, i.e., capable of assuming a wide variety of shapes.

2. It should have tractable parameters that are intuitively and physically meaningful and easy to estimate from the sample data.

3. It should allow for feasible variate generation (fast, exact, and accurate).

Also, one should consider what the requirements or limitations of the simulation language stipulate regarding generation of variates.

2.1.2 Estimating Distribution Parameters


Estimation of the parameters of a particular distribution is controlled by data availability. When data are available, one can use moment matching, percentile matching, maximum likelihood, least squares, or other techniques to arrive at estimates for the parameters of the underlying distribution. Since different techniques often give different parameter estimates, the simulationist is encouraged to use all fitting methods available within the software being used and select the parameters that produce the best fit.
In this chapter we assume that the reader will use software programs such as Simphony to fit distributions, and as such, the mathematical formulations of those techniques are not covered here. Those approaches are discussed in Chapter 7, however. In the absence of data, a simulationist can use subjective estimates of some parameters of the underlying distribution. This can be in the form of the likely mean value of the process (for deterministic analysis), low and high values to fit a uniform distribution, or, when possible, estimates of the low, high, and most likely values to fit a triangular distribution. Programs have made this process even easier by allowing interactive fitting mechanisms for subjective evaluation of certain families, e.g., VISIFIT (DeBrota, Dittus, Roberts, & Wilson, 1989) to fit a Johnson SB (bounded system) distribution subjectively. Simphony also provides basic means of fitting distributions when parameters are known to the user.
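For readers working outside Simphony, the hedged Python sketch below shows what moment matching and maximum likelihood estimation could look like with SciPy; the sample values and choice of candidate distributions are assumptions made purely for illustration.

    import numpy as np
    from scipy import stats

    # Hypothetical sample of activity durations (minutes).
    durations = np.array([5.2, 6.1, 4.8, 7.3, 5.9, 6.6, 5.1, 8.0, 6.4, 5.7])

    # Maximum likelihood estimates for a lognormal model; floc=0 keeps the
    # lower bound at zero, which is common for duration data.
    shape, loc, scale = stats.lognorm.fit(durations, floc=0)
    print("lognormal shape (sigma):", shape, "scale (exp(mu)):", scale)

    # Moment matching for a normal model is simply the sample mean and
    # sample standard deviation.
    mu, sigma = durations.mean(), durations.std(ddof=1)
    print("normal mu:", mu, "sigma:", sigma)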

2.1.3 Testing for Goodness of Fit


Having estimated the parameters of a distribution, one should check for the goodness of fit by comparing the fitted distribution to the empirical distribution and assessing the quality of the fit obtained. Usually one performs the goodness-of-fit test by using statistical tests, like the chi-square or the Kolmogorov-Smirnov tests, or by visually assessing the quality of the fit.
Conducting the two statistical tests mentioned previously is covered in most statistics texts. It is to be noted, however, that the chi-square test was derived for the case of estimating parameters by the method of maximum likelihood. The test can still be applied in other cases, but its results are only approximate. The K-S test checks whether the empirical data have originated from a theoretical distribution with the estimated parameters and should be used accordingly. A good treatment of both tests can be found in
Law and Kelton (1991), Fishman (1977), and other simulation and statistics
texts.
Visual assessment of the quality of the fit is obtained by comparing plots of the fitted and empirical CDFs. This is usually done by applying common sense rather than scientific analysis. Visual assessment of the quality of the fit, however, proves in many cases to be as powerful as any other test and is usually applied in conjunction with the statistical tests. One could also consider the fit of the PDF to the histogram. This should not be taken as conclusive, however, since the fit cannot be finally judged unless one looks at the CDF.
Testing for goodness of fit using statistical tests is made easier when statistical tests are incorporated into the fitting software. Fitting a distribution to a data sample is both an art and a science. Using a flexible family of distributions is encouraged if the simulation software supports variate generation from such families.
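As a hedged illustration of how such a check could be run outside Simphony (the stand-in data and the candidate distribution are assumptions), the sketch below fits a lognormal model and applies a K-S test with SciPy.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sample = rng.lognormal(mean=1.5, sigma=0.4, size=100)   # stand-in data

    # Fit a candidate distribution, then test the fit.
    shape, loc, scale = stats.lognorm.fit(sample, floc=0)
    ks_stat, p_value = stats.kstest(sample, "lognorm", args=(shape, loc, scale))
    print(f"K-S statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")

    # A large p-value means the fit cannot be rejected at the usual levels.
    # Strict K-S critical values assume the parameters were NOT estimated
    # from the same sample, so treat the result as approximate.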

2.2 Output Analysis


After a proper input analysis of duration data for the operation under investigation, the simulation experiment can proceed. For simulation models in which all aspects are deterministic, one simulation run is sufficient to determine the output. A stochastic simulation, on the other hand, will not produce the same output when run repeatedly with independent random seeds. This requires one to make a number of runs with independent seeds for the random-number-generating streams to ensure that a true picture of the system under investigation is provided. Typically, a simulationist collects sample output from the various runs conducted and then uses the sample as the basis for decision-making.
Law and McComas (1986) point out that one of the most common (and
potentially dangerous) practices in simulating manufacturing systems is that
of making only one run of a stochastic simulation. Making decisions based
on a stochastic simulation of a system with one replication can be costly.
The problem is often aggravated when the system is associated with high
degrees of variability, examples of which occur when modelling construction-
equipment breakdowns.
A typical analysis of simulation output usually includes determination of
the following: (1) whether the simulation is deterministic or stochastic, and
(2) whether the simulation reflects a static, transient, or steady state. The following discussion of output analysis is specific to that range of simulation models that can be classified as transient simulations. Wilson (1984) defines transient simulation as follows: a simulation is transient if the modelling objective is to estimate parameters of a time-dependent output distribution over some portion of a finite time horizon for a given set of initial conditions. Most construction operations would be covered by this definition.
Wilson (1984) categorized the analysis of transient simulation by whether or not normal distribution theory can be applied to the analysis. Two types of analysis are relevant: (1) analysis of output parameters that do not significantly deviate from normality, and (2) analysis of output parameters that have non-normal responses. Case 2 has not been frequently encountered in simulation of construction processes. An extensive treatment of the analysis of output data can be found in Welch (1983).

2.2.1 Developing Point and Interval Estimates


Estimating the Mean
Given that the output data are normally (or approximately normally) distributed, normal theory may be used to construct the confidence interval around the mean of the sample data. The unbiased estimator of the mean $\mu$, for a sample $\{x_1, \ldots, x_n\}$ from the normal population, is the sample mean $\bar{X}$, calculated as follows:

$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} x_i.$$

An exact $100(1-\alpha)\%$ confidence interval is then:

$$\bar{X} \pm t_{(1-\alpha/2),(n-1)}\,\frac{S}{\sqrt{n}},$$

where $t_{(1-\alpha/2),(n-1)}$ corresponds to the upper $(1-\alpha/2)$ point of the Student's t-distribution with $n-1$ degrees of freedom and $S$ denotes the sample standard deviation.
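A minimal Python sketch of this interval (illustrative only; the helper name is an assumption) is given below. Feeding it the 30 total times from Table 2.4 should reproduce the interval worked out in Section 2.4.

    import numpy as np
    from scipy import stats

    def mean_confidence_interval(x, alpha=0.05):
        """Exact 100(1 - alpha)% confidence interval for the mean,
        assuming the observations are (approximately) normal."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        x_bar = x.mean()
        s = x.std(ddof=1)                         # sample standard deviation
        t = stats.t.ppf(1 - alpha / 2, n - 1)     # upper (1 - alpha/2) point
        half_width = t * s / np.sqrt(n)
        return x_bar - half_width, x_bar + half_width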
Estimating the Variance


An unbiased estimator of the variance $\sigma^2$ is obtained from the sample variance $S^2$, which is given by:

$$S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{X})^2.$$

The $100(1-\alpha)\%$ confidence interval for $\sigma^2$ (Wilson, 1984) is then:

$$\left[\frac{(n-1)S^2}{\chi^2_{(1-\alpha/2),(n-1)}},\ \frac{(n-1)S^2}{\chi^2_{(\alpha/2),(n-1)}}\right],$$

where $\chi^2_{(1-\alpha/2),(n-1)}$ corresponds to the upper $(1-\alpha/2)$ point of the $\chi^2$-distribution with $n-1$ degrees of freedom.
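The corresponding interval for the variance can be sketched the same way (again an illustration with an assumed helper name, not Simphony output):

    import numpy as np
    from scipy import stats

    def variance_confidence_interval(x, alpha=0.05):
        """100(1 - alpha)% confidence interval for the variance (Wilson, 1984)."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        s2 = x.var(ddof=1)                               # sample variance S^2
        lower = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, n - 1)
        upper = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, n - 1)
        return lower, upper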

Estimating Probabilities
The probability of completing a job on time is also very valuable in a number of situations. A classic example would be the simulation of scheduling networks (e.g., PERT type) in an attempt to determine the probability of meeting a target date.
The cumulative distribution function $F_X$ of an output parameter $X$ tells us the probability that $X$ does not exceed a particular fixed value $x$:

$$F_X(x) = \Pr\{X \le x\}, \quad x \in \mathbb{R}.$$

Assuming that $X$ is normally distributed (or approximately normal), the cumulative distribution function can be estimated by:

$$F_X(x) = \Phi\left(\frac{x - \bar{X}}{S}\right),$$

where $\Phi$ denotes the cumulative distribution function (CDF) of the standard normal distribution.
An approximate $100(1-\alpha)\%$ confidence interval for the probability is given by:

$$\Phi\left(\frac{x - \bar{X}}{S}\right) \pm \frac{z_{(1-\alpha/2)}}{\sqrt{n}}\,\phi\left(\frac{x - \bar{X}}{S}\right)\sqrt{1 + \frac{1}{2}\left(\frac{x - \bar{X}}{S}\right)^2},$$

where $\phi$ denotes the probability density function (PDF) of the standard normal distribution.
Another way of estimating the probability (or any arbitrary quantile) is by directly referring to the empirical CDF that results from the sample being analyzed. This, however, produces only a point estimate of the probability and not a confidence interval.
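A hedged sketch of the point estimate and its approximate interval follows; the helper name is an assumption and the clamping to [0, 1] is a practical convenience, not part of the formula above.

    import numpy as np
    from scipy import stats

    def probability_estimate(x, target, alpha=0.05):
        """Estimate Pr{X <= target} with an approximate confidence interval,
        assuming the output data x are roughly normal."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        z = (target - x.mean()) / x.std(ddof=1)
        p = stats.norm.cdf(z)                            # point estimate
        z_crit = stats.norm.ppf(1 - alpha / 2)
        half = (z_crit / np.sqrt(n)) * stats.norm.pdf(z) * np.sqrt(1 + 0.5 * z**2)
        return p, max(p - half, 0.0), min(p + half, 1.0)  # clamp to [0, 1]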

2.3 Selecting Distributions in Simphony.NET


Simphony.NET facilitates fitting distributions to sample data such as the data given in Tables 2.2 and 2.3 later in this chapter. As seen in Figure 2.2, the user can select different types of distributions from a list of the most popular distributions for specifying the duration of a given task. By selecting a distribution, the corresponding parameters are displayed, and the user can then specify values for them or select a file where data is stored and have Simphony fit a distribution.
Simphony is capable of analyzing the collected input data using three different techniques (i.e., moment matching, maximum likelihood, and least squares); the reader may refer to Chapter 7 for details on how the distribution fitting methods work. In order to test for goodness of fit, the chi-square and K-S tests are performed. Then, an appropriate distribution can be selected.
The steps for fitting distributions and using the result as the duration of the activities are as follows:

1. Create a .CSV file in which the data are stored in one column (a small scripting example follows these steps).

2. Drag a Task element onto the modelling surface.

3. Select the duration property of the Task element.

4. Select Distribution Fitting from the View menu (Figure 2.3).

5. Click the Fit button, then find and select the .CSV file for importing.

6. Choose the fitting method and the desired distribution based on the results of the K-S or chi-square tests (Figure 2.4).

After selecting the desired distribution, the corresponding parameters and the PDF, CDF and Q-Q plots can be seen (Figure 2.5).
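If the raw observations are sitting in a Python list, writing the one-column .CSV file required in step 1 could look like the sketch below; the file name is an assumption, and the first few values are taken from Table 2.2 only as an example.

    import csv

    observations = [60, 2185, 243, 991, 1220]   # durations, one value per row

    with open("breakdown_times.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for value in observations:
            writer.writerow([value])            # one value per row, single column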

Figure 2.2: Selecting a Distribution

Figure 2.3: Distribution Fitting



Figure 2.4: Fitting Results

Figure 2.5: Distribution Editor



2.4 Example of Input Modelling and Output Analysis
A simulation of an earth-moving operation will be used as a practical application. The Simphony.NET model of the operation is shown in Figure 2.6. The model itself is not relevant to the reader at this point as we are only interested in the input data and output analysis from the model. The reader can come back to this example after reading Chapter 5, if interested in recreating it and experimenting with it.
The activities that form the model and their durations are presented in Table 2.1. The shovel's down time and repair time were recorded in previous earthmoving projects and are presented in Table 2.2 and Table 2.3, respectively. For the purpose of illustration, the simulation is carried out for delivering 20,000 tons of dirt.
For fitting a distribution to the given data, Simphony.NET is utilized. The data (which is shown in the tables) should be stored in a .CSV file. Then, the Distribution Fitting function in Simphony is used to fit a distribution to the data. In this example, the Exponential and Log-Normal distributions were selected as the best-fitting distributions for modelling the durations of the tasks, namely "Mean Time Between Breakdown" and "Time to Repair," as shown in Figure 2.7 and Figure 2.8, respectively.
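Outside Simphony, broadly comparable fits can be obtained with SciPy; the sketch below is only illustrative, and the two .CSV file names are assumptions.

    import numpy as np
    from scipy import stats

    breakdowns = np.loadtxt("breakdown_times.csv", delimiter=",")  # Table 2.2
    repairs = np.loadtxt("repair_times.csv", delimiter=",")        # Table 2.3

    # Exponential fit for time between breakdowns (floc=0 pins the origin at zero).
    loc_e, scale_e = stats.expon.fit(breakdowns, floc=0)
    print("exponential mean (scale):", scale_e)

    # Lognormal fit for time to repair.
    shape, loc, scale = stats.lognorm.fit(repairs, floc=0)
    print("lognormal sigma:", shape, "median (scale):", scale)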
In this illustration, the simulation experiment was repeated 30 times using the multiple-run feature of Simphony. Each run was independently seeded in order to attain a random, independent sample of the output parameters of concern. Furthermore, the simulation was controlled by the tons of dirt allowed to be completed before the simulation terminated (20,000 tons). The experiment resulted in 30 observations for each of the output parameters under investigation (e.g., total processing time, production per hour, etc.). In this example, the parameter of interest is the total time required to complete the job. The observations are summarized in Table 2.4.
The choice of making 30 runs was arbitrary for the purpose of illustration. In a real application, the experimenter should derive the number of required runs based on the output parameter considered. In general terms, the larger the number of runs, the more accurate the results will be, for two main reasons: (1) in a small number of runs, the properties of the system may not be completely revealed, and (2) the confidence intervals for the mean and variance of the results shrink as the number of runs increases.

Figure 2.6: Model of an Earthmoving Operation

Table 2.1: Activities and Durations

Activity                          Duration (min)
Shovel loading a 20 ton truck     Beta (1.5, 3, 5, 7)
Truck hauling                     Beta (3, 2, 35, 50)
Truck dumping                     Triangular (1, 3, 2)
Truck Return                      Beta (2, 2, 40, 30)

Table 2.2: Time Between Breakdowns (min)

60 2,185 243 991 1,220 1,959 880 1,830 323


3,020 1,290 1,015 1,605 852 325 910 4,375 5,730
330 2,130 50 2,860 4,520 675 115 7,595 270
7,935 890 1,200 30 3,350 470 958 3,360 146
885 1,365 195 2,020 2,670 31 1,950 2,130 1,157
370 209 67 1,855 2,195 3,885 2,190 2,079 1,710
730 1,160 1,000 240 1,043 290 3,315 2,385 202
2,925 245 1,005 500 760 630 7,530 1,110 675
1,740 853 845 835 1,625 11,010 240 3,335 1,620
1,440 3,030 1,140 2,605 540 5,374 1,865 766 2,350
2,860 175 1,470 1,170 1,560 1,725 41 241 5,305
94 2,390 4,185 865 545 240 370 1,200 1,009
415 945 675 6,454 35 1,420 2,195 1,674 231
9,950 976 1,130 7,410 2,895 1,159 15,000 60 1,540
3,486 7,968 6,195 5,194 2,290 2,715 552 982 1,525
1,112 1,440 3,810 976 1,350 3,958 5,280 3,000 54
2,985 2,035 2,784 340 1,050 1,495 1,195 1,873 1,185
3,180 5,194 2,995 120 1,129 3,990 1,140 2,700 4,359
1,334 1,710 4,200 21 156 1,195 555 1,300 6,855
1,120 2,406 1,575 375 155 359 1,923 135 720
1,560 2,081 141 3,535 1,305 530 268 4,740 1,536
625 1,855 385 2,182 2,264 126 287 200 50
600 2,325 2,595 4,205 830 1,013 675 20 17,368
2,170 2,890 2,745 1,365 5,760 210 1,569 4,600 15,69
1,820 1,055 227 320 340 780 175 4,361 2,045
1,310 355 1,433 1,083 895 770 240 1,860 419
2,835 290 3,825 1,422 3,030 315 5,606 49 492
255 3,390 46 21 70 1,879 2,530 5,029 6,961
479 3,285 1,890 1,941 2,213 3,522 136 10,565 2,789
160 710 1,790 3,600 770 1,006 878  

Table 2.3: Time To Repair (min)

10 40 45 376 79 29 15 20 33
244 15 37 195 34 15 25 170 45
60 5 15 20 10 15 55 350 99
74 60 30 10 30 55 13 60 145
15 510 20 40 20 30 114 120 10
330 59 66 559 815 10 50 15 377
555 19 20 10 41 5 62 25 30
85 30 120 10 65 36 570 30 58
92 143 36 25 72 20 25 567 35
390 93 30 15 242 30 15 20 30
20 118 300 32 29 60 40 169 20
61 75 10 185 60 90 55 116 19
10 36 63 508 30 60 30 60 230
542 112 10 75 50 342 25 15 15
39 1079 100 60 130 75 10 22 25
636 35 45 6 30 160 15 75 53
591 30 898 25 120 25 45 52 30
85 94 94 15 38 20 214 30 535
133 466 25 20 155 15 21 60 639
100 106 15 15 40 20 29 10 630
50 62 140 180 105 124 15 27 91
40 89 65 104 449 125 75 30 49
60 20 25 5 153 32 15 19 123
852 32 20 104 11 30 30 22 30
24 30 10 116 20 79 20 60 298
110 10 5 5 8 15 35 20 40
179 45 69 7 567 180 20 20 49
140 45 138 45 20 69 110 20 429
151 478 20 1000 9 10 15 117 10
20 10 709 30 15 43 37 88 

Figure 2.7: Exponential Distribution Fitted to Breakdown Data

Figure 2.8: Log-normal Distribution Fitted to Repair Data



Formal statistical derivation can be followed to determine the number of runs required to predict a certain output parameter within some degree of confidence. Coverage of this, however, is not within the scope of this example. In subjective terms, the number of runs should not be less than 10. Schmeiser (1982) suggests the use of 10 to 30 runs to predict the mean for steady-state simulation. This guideline is often adequate for inferring the underlying mean in transient simulation as well.
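Although a full treatment is outside the scope of this example, one commonly used rule of thumb (stated here as an aside and an assumption, not taken from this text) sizes the number of runs $n$ so that the confidence-interval half-width does not exceed a target error $E$:

$$n \ge \left(\frac{t_{(1-\alpha/2),(n-1)}\,S}{E}\right)^2,$$

where $S$ is estimated from a small pilot set of runs; because the t quantile depends on $n$, the inequality is solved iteratively, or $z_{(1-\alpha/2)}$ is used in place of the t quantile for a first pass.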
For our results (Table 2.4), we calculate the mean and variance as follows:

$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} x_i = \frac{1}{30}\sum_{i=1}^{30} x_i \approx 18{,}417.48,$$

$$S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{X})^2 \approx \frac{1}{29}\sum_{i=1}^{30}(x_i - 18{,}417.48)^2 \approx 336{,}029.42.$$

To calculate a 95% confidence interval around the mean, we first need to determine $\alpha$ such that $100(1-\alpha)\% = 95\%$. In this case, $\alpha = 0.05$, so the confidence interval is:

$$\bar{X} \pm t_{(1-\alpha/2),(n-1)}\frac{S}{\sqrt{n}} \approx 18{,}417.48 \pm t_{0.975,\,29}\sqrt{\frac{336{,}029.42}{30}} \approx 18{,}417.48 \pm 2.0452 \times 105.8347 \approx 18{,}417.48 \pm 216.45,$$

where $t_{0.975,\,29} = 2.0452$ is the value retrieved from a table of t-distribution quantiles (available in most statistics texts).
The dangers of inappropriate analysis of the simulation output can be easily demonstrated by referring to the earthmoving example. Consider, for example, making only one simulation run and using the results to make the decision. A brief look at Table 2.4, which summarizes the results of 30 runs, shows that the simulation experiment could have predicted the total time as high as 20,000.494 minutes or as low as 17,477.447 minutes, depending on what initializing seeds were specified. None of the numbers would appear to be correct since they are merely observations from the underlying population. Using any of them can lead to costly decisions since the estimates can be overly optimistic or overly pessimistic depending on the numbers obtained. A correct analysis should be based on multiple runs and development of confidence intervals for any estimate under investigation.

Table 2.4: Simulation Results for 30 Runs

Run No. Total Time Run No. Total Time


1 17,994.682 16 18,599.365
2 18,760.307 17 19,525.843
3 17,790.226 18 18,647.662
4 20,000.494 19 18,407.110
5 18,352.422 20 17,477.447
6 18,677.752 21 18,358.096
7 18,107.679 22 18,241.668
8 19,370.378 23 17,871.019
9 17,691.713 24 18,478.802
10 18,388.122 25 18,653.314
11 18,504.770 26 17,664.908
12 18,221.743 27 17,836.342
13 18,271.358 28 18,783.386
14 18,074.536 29 19,093.670
15 17,764.122 30 18,915.379
Chapter 3
Verification and Validation
Simulation models are generally developed for decision support. For example, a simulationist can be checking if a particular process is feasible, determining the costs or schedule of a project, or looking for improvement in the process. It is, therefore, imperative that the simulation model captures the intended real world correctly and that its results are valid. That way the decision maker can rely on them to make their decisions. Creating correct models, and getting users and other simulationists to have confidence in these models, requires the systematic use of the model development process described in Chapter 1, simply because a number of errors can arise when creating simulation models. According to Whitner and Balci (1989), these errors will typically arise from one or more of the following sources:

• input data,
• the conceptual model,
• the simulation model (its implementation), and
• the simulation model development environment.

These errors can be identified and eliminated through a simulation model verification and validation process.

3.1 Simulation Model Verification


Simulation model verification is a process followed to confirm the correctness of model implementation. To complete this, Sargent (2007) suggests: 1) specification verification, to assure that the implementation of the specifications on the specified simulation environment is satisfactory; and 2) implementation verification, to assure that the simulation model has been implemented according to the simulation model specification.
Specification verification has a lot to do with conceptual modelling. In other words, conceptual verification tries to answer the question of whether it is appropriate to use a UML class diagram, an activity diagram or a state chart in the formalism and documentation of a specific phenomenon that is to be modelled. It also confirms whether the selected option has been created in line with the rules of the modelling language.
The main goal of simulation model implementation verification is to trap errors that arise in the process of building the model. Examples of these errors include:

• logical errors,
• syntax errors,
• data errors,
• experimental errors, and
• bugs within the simulation environment.

Here are classic examples of such errors:

• Logical errors: Errors arising from the simulationist implementing incorrect logic in their model are referred to as logical errors. This type of error can take the form of incorrect program control structures, inappropriate use of counters or termination criteria for loops, etc., used in code snippets which are embedded within a simulation model. These often result in undesirable results and model behaviour, e.g., infinite loops and deadlock situations. From a process interaction modelling perspective, logical errors can also arise from inappropriate sequencing and connection of modelling elements using directional arrows.

• Programming errors: Simulationists using simulation environments that can be extended (e.g., the Simphony General Purpose modelling template) and who are not proficient in programming can cause errors by virtue of breaking the rules of the programming language they are
using, resulting in syntax errors in the code snippets they are trying to embed into their models. Examples of this type of error include wrong declarations, incorrect conversion of types, inappropriately ordering the values of the parameters of statistical distributions, etc. In most cases, these types of errors will be trapped by the simulation environment as the model is run or during development.

• Data errors: Considered from a simulation modelling perspective, these arise when the wrong statistical distributions are used to model a given phenomenon or the wrong parameter values are used for distributions. An example of an input data error is a simulationist using a normal distribution to model an activity duration simply because it was obtained as a best fit for their data (a normal distribution can generate negative durations, which are physically impossible).

• Experimental errors: These are encountered after the development of the model is complete and it is being put to use. The most common type of experimental error arises from the failure to perform multiple simulation runs for a stochastic model. Also, experimental errors arise when simulationists fail to seed their simulation models for the purposes of comparing scenarios.

• Simulation system errors: Most simulation environments provide numerous services to simulationists. These may include core and modelling services defined within the simulation engine, e.g., General and Special Purpose templates, algorithms defined within math libraries, and core simulation services such as calendars, resource manipulations, and simulation event manipulations, etc. Incorrect implementation of these services by the developer(s) results in errors/bugs. There is a need for simulationists to verify the behaviour of specific aspects of a simulation environment prior to using its services.

There are several ways that simulationists can verify that their models are working as intended. Examples of these include:

• Animation/visualization of the simulation. Visualization is a neat feature that some simulation environments provide. Graphically displaying details of a simulation model as events unfold is extremely valuable in verifying simulation models. Visualizing simulation models is an effective way of trapping logical errors.

• The use of trace logs. Most simulation environments provide features that facilitate tracing information as the model executes. Simulation model verification can be performed by writing details of simulation events to trace logs and scrutinizing them. The existence of logical errors can be checked by examining the order in which these events are traced and their time stamps (a minimal sketch of this idea appears after this list).

• The use of entity counters. Another way to check for the presence of logical errors is through the use of counters in the model to track the flow of entities as simulation events evolve.

• Performing unit tests. Unit tests are a popular way to confirm whether a newly introduced algorithm was implemented correctly into a simulation environment, and performs well. The typical approach is to create a model that works and whose results are verified. When new pieces are introduced into the model (e.g., new user-written code, new models, etc.), the unit test is run. If the results are the same as expected, then the additions did not introduce new errors.

• Integrity checks provided by the simulation environment. It is highly advisable for simulationists to check their models for integrity prior to running them. Robust simulation environments, such as Simphony, provide a feature for checking model integrity. The Simphony simulation system reports logical errors and data errors that can be identified prior to model execution. Messages displayed guide the simulationist on what they need to do to fix these errors.
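As a minimal, language-agnostic illustration of the trace-log and entity-counter ideas above (plain Python, not Simphony code; the function names and event data are assumptions):

    import logging

    logging.basicConfig(level=logging.INFO, format="%(message)s")

    entities_created = 0
    entities_finished = 0

    def trace(sim_time, event, entity_id):
        """Write one time-stamped line per simulation event."""
        logging.info(f"t={sim_time:8.2f}  {event:<12}  entity={entity_id}")

    # Example trace of a tiny run; a real model would call trace() from its
    # event-handling code as events are executed.
    for entity_id, (start, finish) in enumerate([(0.0, 3.4), (1.2, 5.9)], 1):
        entities_created += 1
        trace(start, "arrive", entity_id)
        trace(finish, "depart", entity_id)
        entities_finished += 1

    assert entities_created == entities_finished   # simple integrity check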

3.2 Simulation Model Validation


A computer simulation model is built to provide a virtual environment that is
an accurate and credible replica of a real world system. This virtual environ-
ment provides an alternative that simulationists can experiment on without
disrupting the real world system. In order to guarantee the accuracy and
credibility of a computer simulation model, validation needs to be done. It is
important to note that simulation models cannot match a real world system
exactly. However, the objective is to develop a model that closely emulates
reality, that is, one that is accurate and credible to the extent required by
the decision maker to facilitate making a good decision.

Various definitions for model validation exist in the literature. For example, Sargent (2003) defines validation as "the substantiation that a computerized model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model."
The following are practical validation approaches (based on Sargent (2003)) that we have found useful in construction engineering and management simulation modelling applications:

• Face validity: Face validation involves domain experts and users of the model evaluating the model output for correctness. This type of validation has the advantage of expediting the confidence-building process of users through their involvement in the model development and validation process (Banks et al., 2010; Carson, 2002).

• Comparison to other models: Various results (e.g., outputs) of the simulation model being validated are compared to results of other (valid) models. For example, simple cases of a simulation model are compared to known results of analytic models such as a queuing model, or the simulation model is compared to other simulation models that have been validated for simpler cases.

• Degenerate tests: The degeneracy of the model's behaviour is tested by appropriate selection of values of the input and internal parameters. For example, does the average number in the repair-bay queue of an equipment repair problem continue to increase over time when the number of pieces of equipment is increased?

• Event validity: The events or occurrences of the simulation model are compared to those of the real system to determine if they are similar. For example, compare the number of major catastrophic failures in a shovel over its service time, or the events where the weather dips below -50 °C.

• Historical data validation: If historical data exist (or if data are collected on a system for building or testing a model), part of the data is used to build the model and the remaining data are used to determine (test) whether the model behaves as the system does. We commonly use this approach in training artificial neural networks.

Investigating the validity of a simulation model is not different from hypothesis-testing undertakings in traditional research studies. It entails performing tests that attempt to demonstrate that the model is valid for its application domain. A simulation model is considered invalid for a specific set of conditions if the accuracy of its output falls outside a prescribed acceptable range (Sargent, 2007). When developing models, simulationists normally have to balance the confidence in the model against the cost of developing the model. The more development, the more the cost, and, generally, the more confidence in the model, as demonstrated in Figure 3.1 (adapted from Sargent (2007)). The simulationist will, however, experience diminishing returns after a certain investment in the model, as demonstrated in Figure 3.1. Generally speaking, an effective strategy is to perform tests that have been appropriately selected and designed to demonstrate the validity of the simulation for its intended purpose under a given set of conditions. The simulation model is deemed invalid if it fails any of these tests. It is possible to make certain types of errors when validating simulation models. These include Type I and Type II errors. A Type I error occurs when a valid model is erroneously rejected as invalid. A Type II error, on the other hand, occurs when an invalid model is accepted as valid. Type I error is often referred to as the model builder's risk because of the likelihood of the builder's modelling efforts being disregarded unfairly under the pretext that they are worthless. Type II error is sometimes referred to as the model user's risk because of the likely consequences associated with bad results arising from reliance on an invalid model.

Figure 3.1: Decision Variables for Model Validation (Sargent, 2007)
It is important to highlight how the above noted validation approaches can be used at different stages of the simulation model development process that was presented in Chapter 1. These are discussed next.

3.2.1 Conceptual Model Validation


Conceptual modelling involves the formalism/abstraction, design and documentation of a number of different aspects related to the system or operation under analysis. Identifying and documenting the specifications and requirements of the domain, i.e., its environment and boundaries, is one aspect of conceptual modelling. Documentation of concept models takes the forms highlighted in the discussion of model development. Other aspects include detailing the different constructs that exist in the domain, their interaction with each other and with the environment or domain boundaries. These constructs represent the internal structure of the model, model parameters or variables, and relationships (i.e., mathematical, logical or causal) between these. Formulation and documentation of assumptions is another important aspect of conceptual modelling. Examples of assumptions include the way priorities are assigned for the allocation of resources; statistical distribution types chosen to model arrival processes; probability threshold values used to model stochastic processes that involve forking, etc.
Validation of a conceptual model entails determining whether the right constructs have been abstracted and appropriately represented to a desirable level of detail. It also involves confirming the correctness of the abstracted domain boundaries. Experts can be used for this sort of assessment in a face validation exercise. For purposes of validating the types of relations used and assumptions made, data from the real system can be used. For example, input modelling could be performed on this data to confirm the accuracy of distribution types selected.

3.2.2 Input Data Validation


Data plays a vital role in simulation model development and validation. It
is important to have good quality data sets of reasonable quantity. Prior to
the use of any data, it is necessary to carry out preparation work on it. The
objective of performing data validation is not to modify bad data but rather
to assess the fitness of data for use and disregard that which is found to be bad. In addition to disregarding bad data, one can make recommendations on good collection and archiving procedures.
In the context of simulation model development and validation, data may be used in one of two ways, i.e., operational model validation and conceptual model development. Data used in operational validation of simulation models can be categorized into two groups for convenience. These categories are input data and output data. Data may also be used in the generation of mathematical or logical relationships that are in turn used in the development of the concept and model.
Prior to utilizing data in the model development and validation processes, it should be subjected to a number of tests to confirm its validity. According to Sargent (2007), these tests may include internal consistency checks and checks to establish the existence and correctness of outliers.

3.2.3 Operational Validation


Operational validation is the evaluation of a simulation model for validity.
It mainly entails making use of the simulation model's outputs to determine
whether the model is valid for the purpose it was intended. It is advisable to
perform this type of validation at the tail end of the validation process given
that any weaknesses found in this aspect of validation will have arisen from
earlier development stages. All validation techniques previously presented
can be applied in the validation of the simulation model. There are two
ways that operational validation can be performed. These include 1) an
exploration of the model behaviour, and 2) comparison of the outputs of the
model being validated to outputs of a similar system/operation.
The rst strategy, i.e., exploration of model behaviour, entails studying
the outputs of the model in isolation without comparison to other types
of output. The simulation model can be run in extreme and average experi-
mental conditions to assess whether the magnitude of its outputs change, the
direction in which they change, i.e., increase or decrease, and by how much
they change. Also, the absolute magnitude of model outputs for each of the
given set of experimental conditions can be assessed as a form of operational
validation of a simulation model.
In the second operational validation strategy, simulation model outputs
are compared to outputs from another credible source. The credible source
may be one or both of the following: a real system/operation under study
that is observable or measurable, and model(s) that emulate(s) the real system/operation.
There are three ways in which the simulation model output and the output of the real system or other credible models can be compared. These include the use of graphical representations, confidence intervals, and hypothesis tests (Sargent, 2007).
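As a hedged sketch of the hypothesis-test route (illustrative Python with made-up sample arrays, not a prescribed procedure), simulated and observed cycle times could be compared like this:

    import numpy as np
    from scipy import stats

    observed = np.array([18.2, 19.1, 17.8, 18.9, 18.4, 19.3])   # field data
    simulated = np.array([18.0, 18.7, 18.3, 19.0, 18.1, 18.8])  # model output

    # Two-sample t-test (Welch's version, which does not assume equal variances).
    t_stat, p_value = stats.ttest_ind(observed, simulated, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

    # A small p-value suggests the model and the real operation differ in mean
    # response; a large one fails to reject operational validity on this
    # measure alone.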

3.3 Simulation Model Accreditation


Simulation model accreditation is defined as the official certification that a model and associated data are acceptable for use for a specific application (DoD, 2003).
Model accreditation should typically be performed by a third party who determines the model's ease of use for the intended user, the model's validity and its reliability. The objective of model accreditation is to confirm that the simulation model and its accompanying documentation conform to all the modelling requirements/specifications.
Chapter 4
Modelling with CYCLONE
In Chapter 1, we noted that we need models to help us describe:

1. the product components to be built (the physical aspects of the prod-


uct),

2. the process that we envisioned for building each component,

3. the methods and resources required, when they are involved and how
they combine to complete the work,

4. the relative order of the components, and

5. the interfaces between components and the external environment.

With advancements in computer software, the above requirements can be


achieved by describing the world in the form of software objects and their
interactions. This is often achieved through simulation modelling languages.
In this chapter, we discuss the CYCLONE (CYClic Operations Networks)
modelling language (Halpin, 1977).
CYCLONE was amongst the rst simulation languages developed for use
in construction. It was mainly geared towards modelling construction pro-
cesses, rather than an entire construction system. With this focus, the mod-
elling language would be easy to learn and quick to deploy for analyzing
construction processes.
In CYCLONE, the construction process is abstracted and represented in
the form of operations and processes that are composed of tasks and queues.
We model the abstracted process using a set of graphical modelling elements


and directional arrows. Then, we use virtual entities that represent resources
and follow their journeys in the model to describe the dynamic aspect of the
construction process. The simulation is done using a computer, but can also
be done manually for simple models.
In this chapter we will outline how to develop simulation models using
CYCLONE and how to simulate them within Simphony. First, we cover
the graphical modelling elements of CYCLONE and their rules. Then, we
will detail developing CYCLONE models, hand simulation, and computer
simulation. We will conclude with practical applications of construction pro-
cesses.

4.1 A Motivational Example


Have you ever waited in a bank teller or coffee shop line and tried to figure out why it was taking so long? The wait gets longer and you start analyzing in your mind what is going on:

• What if there were more servers?
• What if the server was more efficient?
• Why don't they have a separate line for those with picky orders that take forever? My order is straightforward!
• How long is the average customer waiting, and at what point do they decide it is not worth it and go somewhere else?

You then start planning how they could change the system to make it more efficient! You are already simulating.


Figure 4.1: Schematic of a Coffee Shop



Whether it is a construction process, a bank teller, a truck and shovel, or the coffee shop, process interaction simulation enables you to model and analyze these types of systems. When you build a simulation model, you can analyze any aspect of the system, collect statistics to assess its performance and redesign it to suit your objectives. To do this, you need a modelling tool and information about the system.
Let's suppose that you have been hired by the coffee shop owner to study the shop and report to her about whether the waiting time is reasonable or not. She needs to know if the wait is costing her customers and has established that if, on average, a customer has to wait more than 5 minutes in line, they will not return.
When observing the coffee shop line, there are two important events that matter. The first event is the arrival of a new customer who begins to wait in line. The second event is related to the service: when a customer is served they leave the line.
For the owner, what is important is recording the time the customer enters the queue (the first event) and then recording the time when they leave the queue (after completing service). The difference between the two times is the waiting time in the queue. If we record this for all customers, then we can find the average waiting time and report on it. If it is higher than 5 minutes, then we would redesign the coffee shop queue; we can add another waiting line, use more servers, etc.
Let us assume we can abstract the real world by using two graphical elements: a queue ("Q") to represent waiting of entities and a box to represent the activities. Figure 4.2 is a crude simulation model of the coffee shop using these two elements. In this model, customers originate from an infinite queue represented by the "Q" labelled "Customer Pool." Customers arrive at the shop randomly over time, and their arrival is represented by the box labelled "Arrival." This box will represent the time elapsed between two customers. This is determined using Table 4.1, data that was collected by observing when arrivals take place for an entire morning. As soon as customers have been deemed to have arrived at the shop, they leave the box and get into the coffee queue, shown as the "Q" labelled "Customer Queue." There, they wait until the server, who is in the "Q" labelled "Server," completes the service for the customer in the box labelled "Service." Once that is done, the customer leaves, and the next in line enters the service box. The departing customer goes into the "Q" labelled "Served Customers," which is their final destination. Let's do a quick simulation of this system

[Model elements: Customer Pool → Arrival → Customer Queue → Service (with Server) → Served Customers]

Figure 4.2: Simulation Model of a Coffee Shop

Table 4.1: Coffee Shop Customer Data

Customer No.   Arrival Time (min)   Service Time (min)
1              3.34                 3.01
2              5.54                 2.78
3              8.05                 4.57
4              18.55                4.21
5              21.66                3.99

on paper. We'll assume that customers will arrive and be served according to the data shown in Table 4.1.
At the start of simulation, our model might look like Figure 4.3. The stars in the "Q" labelled "Customer Pool" represent the 5 customers who will be arriving at the coffee shop, while the star in the "Q" labelled "Server" represents the single server working at the shop.

Figure 4.3: State of Coffee Shop at Time 0.00

Once simulation begins, the five customers will move from the "Q" labelled "Customer Pool" to the box labelled "Arrival." Once there, the box will hold them for the amount of time specified in the second column of Table 4.1, i.e., it will hold one of the customers for 3.34 minutes, one for 5.54 minutes, one for 8.05 minutes, and so on. The state of the model at this point is shown in Figure 4.4.

Figure 4.4: State of Coffee Shop at Time 0.00

The model will remain in this state for 3.34 minutes as nothing else can happen until the first customer arrives at the coffee shop. Once this amount of time has elapsed, a customer will exit the box labelled "Arrival" and enter the "Q" labelled "Customer Queue." The state of the model at this point is shown in Figure 4.5.

Figure 4.5: State of Coffee Shop at Time 3.34

The customer is now waiting to be served, and since the server is currently idle, this process can begin immediately. Both the server and the customer move from the "Q" they're currently located in to the box labelled "Service," as shown in Figure 4.6. From Table 4.1, we see that they will remain in that box for 3.01 minutes, i.e., they will leave it when the simulation time reaches 3.34 (the current simulation time) plus 3.01 (the service duration), so at time 6.35 minutes.

Figure 4.6: State of Coffee Shop at Time 3.34

Nothing else of interest can happen at this point as no other customers have arrived. Notice, however, that before service of the first customer finishes, the second customer is scheduled to arrive (at time 5.54). When this happens, a customer will move from the box labelled "Arrival" to the "Q" labelled "Customer Queue," as shown in Figure 4.7. This time, though, the customer will be forced to wait as the server is still engaged with the first customer.

Figure 4.7: State of Coffee Shop at Time 5.54

Nothing else can happen in the model until the server finishes serving the first customer at time 6.35. When the simulation reaches that point, the first customer will leave the box labelled "Service" and enter the "Q" labelled "Served Customers," while the server will leave the box and return to the "Q" labelled "Server." The state of the model is shown in Figure 4.8.

Figure 4.8: State of Coffee Shop at Time 6.35

Now that the first customer has left the system, we should calculate the amount of time he/she spent in the coffee shop. Looking back on our discussion, we see that he/she arrived at time 3.34 and left at time 6.35, so he/she spent a total of 6.35 − 3.34 = 3.01 minutes in the system.
Having finished with the first customer, the server is now available to serve the second. The server and the second customer both leave their respective "Qs" and enter the box labelled "Service." The state of the model at this point is shown in Figure 4.9.

Figure 4.9: State of Coffee Shop at Time 6.35

It takes 2.78 minutes to serve the second customer, so they will be held in the "Service" box until time 6.35 + 2.78 = 9.13. Before the simulation can reach this point, however, the third customer is scheduled to arrive (at time 8.05). When this happens, a customer will move from the box labelled "Arrival" to the "Q" labelled "Customer Queue," as shown in Figure 4.10. As with the second customer, this one will be forced to wait as the server is busy.

Figure 4.10: State of Coffee Shop at Time 8.05

Nothing further will happen until the second customer exits the shop at time 9.13. When the simulation reaches this point, the second customer will leave the box labelled "Service" and enter the "Q" labelled "Served Customers," while the server will leave the same box and return to the "Q" labelled "Server." The state of the model at this point is shown in Figure 4.11.

Figure 4.11: State of Coffee Shop at Time 9.13

Now that the second customer has left the system, we should calculate the amount of time she spent in the coffee shop. Looking back on our discussion, we see that she arrived at time 5.54 and left at time 9.13, so she spent a total of 9.13 − 5.54 = 3.59 minutes in the system.
We leave it as an exercise for the reader to continue this simulation until the fifth and final customer exits the coffee shop at time 26.75. The state of the system at that point is shown in Figure 4.12.

Figure 4.12: State of Coffee Shop at Time 26.75

The results of our simulation are shown in Table 4.2. From these results, we can see that, on average, each customer spends 4.31 minutes in the coffee shop, which is under the 5 minutes the owner hopes to achieve.
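A compact Python sketch of the hand simulation above (not a CYCLONE model, just the single-server first-in-first-out logic) reproduces the times in Table 4.2.

    # Arrival and service times from Table 4.1 (minutes).
    arrivals = [3.34, 5.54, 8.05, 18.55, 21.66]
    services = [3.01, 2.78, 4.57, 4.21, 3.99]

    server_free_at = 0.0
    times_in_system = []
    for arrive, service in zip(arrivals, services):
        start = max(arrive, server_free_at)     # wait if the server is busy
        finish = start + service
        server_free_at = finish
        times_in_system.append(finish - arrive)

    print([round(t, 2) for t in times_in_system])   # [3.01, 3.59, 5.65, 4.21, 5.09]
    print(round(sum(times_in_system) / len(times_in_system), 2))   # 4.31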

4.2 CYCLONE
CYCLONE, which stands for CYCLic Operations Network, is a construc-
tion simulation language introduced by Halpin in 1977. Halpin's approach
revolves around the concept that construction operations can be abstracted
in the form of cyclic networks of modelling elements that represent the tran-
sition of resources between two states: an active state and an idle state.

Table 4.2: Coffee Shop Simulation Results

Customer No.   Arrival Time (min)   Service Time (min)   Time in System (min)
1              3.34                 3.01                 3.01
2              5.54                 2.78                 3.59
3              8.05                 4.57                 5.65
4              18.55                4.21                 4.21
5              21.66                3.99                 5.09

To create a CYCLONE model, one simply has to think of a resource


as a virtual entity that is being processed in the network through a series
of modelling elements. By following the life cycle of the resource, we can
represent a real system.

4.2.1 Fundamental Concepts


In order to be able to build models in CYCLONE, we need to introduce a
few concepts:

1. Models are composed of modelling elements and virtual entities that


flow from one element to the other until simulation is terminated (a
certain condition is met to end the simulation).

2. We try to create a model by describing the real-world systems they


represent by following the journey of the virtual entities between the
modelling elements.

3. The modelling elements process the virtual entities as they arrive and
release them to subsequent elements upon completion.

There are two basic elements in CYCLONE: a Task element and a Queue element. The Task element represents the active state of a resource and can be of two forms: constrained, requiring a combination of resources prior to allowing resources to flow to the next element (called a Combi), or unconstrained, where resources flow through it unhindered. The Queue element represents the idle state of the resource. That is where resources that cannot proceed to other elements wait. There are other elements to regulate the flow of resources in the model. Those include the following:

• Queue element (Queue),
• Combination task (Combi),
• Normal task (Normal),
• Production counter,
• Function, and
• Probabilistic branch (not an original CYCLONE element).

Models also include entities (abstract elements) and arrows that connect elements and dictate the direction of flow for entities emanating from the element. The simplified model in Figure 4.13 demonstrates the modelling principles in CYCLONE.

Figure 4.13: Fragment of a CYCLONE Model

The incomplete model in Figure 4.13 demonstrates two basic elements, the Queue and the Task. Let us refer to the element by the label shown beneath it. The stars (*) in the model represent the virtual entities: Queue 1 has two entities in it while Queue 2 has one entity prior to the simulation starting. Task 3 is a constrained task which requires 10 minutes to complete, while Task 4 is unconstrained and requires 20 minutes to complete. When entities arrive at Task 3, they are delayed for 10 minutes before they are released. The directional arrows show that entities will flow from Queue 1 to Task 3 and from Queue 2 to Task 3. Once Task 3 is done with processing the entities (10 minutes after commencement of the task), one entity will flow to Task 4, and one to Queue 1; 20 minutes after commencing Task 4, the entity is released.
When we simulate the model in Figure 4.13, entities start in Queues 1 and 2, and then move into Task 3 at time zero. Once Task 3 is complete, one
of the entities will flow to Queue 2 and one to Task 4. The following sections detail each of the elements and their functions.
The Queue Element

As its name implies, the main function of a Queue element is
to provide a location in the model for entities to queue while
awaiting conditions to be met downstream so they can proceed to
the next element. Entities flow through a Queue element during
their journey through the model. As soon as they arrive at the
Queue element, they enter the queue in a first in, first out (FIFO) order.
Once conditions are met for their release (see the Combi element below),
the waiting entities are routed to the next element through the connecting
arrows.
The input point of a Queue element may be connected to any type of
element except another Queue. The output point of a Queue element may
only be connected to Combi elements. It is also permissible for the input
and/or output point of a Queue element to remain unconnected.
A Queue element has two functions in CYCLONE. First, it initializes
entities within the model at the beginning of simulation: all CYCLONE
entities begin their life-cycle inside queue elements. Second, it provides a
location for entities to queue while awaiting release to Combi elements.
Each Queue element has the following properties:

InitialLength (input): The number of entities within the Queue at the
    start of simulation.

ReportStatistics (input): A boolean value indicating whether or not the
    Queue should appear in the Statistics Report that Simphony generates.

CurrentLength (output): The number of entities in the Queue at the end
    of simulation.

FileLength (statistic): An intrinsic (time-dependent) statistic describing
    the length of the File over time. An intrinsic statistic is one in which
    the amount of time the statistic holds a particular value must be taken
    into account when calculating statistical estimators such as mean and
    variance.

PercentNonEmpty (statistic): An intrinsic (i.e., time-dependent) statistic
    describing the amount of time the Queue had one or more entities
    waiting in it.

WaitingTime (statistic): A non-intrinsic (i.e., time-independent) statistic
    that describes the amount of time that entities needed to wait in the
    Queue. A non-intrinsic statistic is one in which the amount of time
    the statistic holds a particular value is not considered when calculating
    statistical estimators such as mean and variance.
The Combi Element

The Combi element represents a constraint. During simulation,
entities can flow into the Combi from preceding queues, provided
that each preceding queue contains at least one entity. The entities,
therefore, wait in their respective queue until every queue
preceding the Combi element has one entity in it. Once this condition
is met, the entities are advanced to the Combi where they will wait
for a given duration specified by the Duration property. Once the duration
passes, entities are released to every element connected to and following
the Combi. The reader should note that in CYCLONE, entities are not
truly tracked in the simulation. They are a convenient way to visualize the
scheduling of tasks in the model. For example, if a Combi is preceded by
three Queue elements and it receives three entities, but is only connected
to two subsequent elements, it will only release two entities, even though it
received three.
The input point of a Combi element may only be connected to Queue
elements. The output point of a Combi element may be connected to any
type of element except a Combi. It is permissible for the input and/or output
point of a Combi element to remain unconnected.
When connected to a single Queue element, entities flow into a Combi
element without restriction and are processed simultaneously (i.e., held for
the same duration). On the other hand, when connected to multiple Queue
elements, the Combi will not commence processing until there is at least one
entity available in each of the Queue elements it is connected to. Once this
condition is fulfilled, only one entity is drawn from each queue node, held in
the Combi for the specified duration, and then just one entity is released.
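The start condition and the one-entity-per-queue draw described above can be summarized in a few lines of Python. This is a sketch of the rule only, assuming each preceding queue is represented as a simple list of waiting entities; it is not Simphony's internal implementation.

    # Sketch of the Combi start rule described above (not Simphony internals).
    def combi_can_start(preceding_queues):
        # A Combi may begin only if every preceding Queue holds at least one entity.
        return all(len(queue) >= 1 for queue in preceding_queues)

    def start_combi(preceding_queues):
        # When the condition is met, exactly one entity is drawn from each queue.
        if not combi_can_start(preceding_queues):
            return None
        return [queue.pop(0) for queue in preceding_queues]

    trucks, loaders = ["truck A", "truck B"], ["loader"]
    print(start_combi([trucks, loaders]))  # ['truck A', 'loader']
    print(start_combi([trucks, loaders]))  # None: the loader queue is now empty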
Each Combi element has the following properties:
Duration (input): The duration of the task. The time can be constant or
    random as required, though the value should never be negative. Be
    especially wary of probability distributions that are unbound below
    (the normal distribution, for example). Simphony will issue a warning
    if you specify such a distribution.

Priority (input): A Combi element with a higher priority will get preference
    in receiving entities from Queue elements over those with a lower
    priority.

ReportStatistics (input): A boolean value indicating whether or not the
    statistics associated with the Combi should appear in the Statistics
    Report that Simphony generates.

InterArrivalTime (statistic): A non-intrinsic (time-independent) statistic
    describing the amount of time between entity arrivals. Every time
    an entity arrives at the element, the amount of time that has passed
    since the previous entity arrived is collected.
The Normal (Task) Element

The Normal element simulates the processing of an entity as it
completes a certain task. The element receives an entity, holds it
for a specified duration, and then releases it. The Normal element
can process an infinite number of entities simultaneously, so it
represents an unconstrained task.
The input point of a Normal element may be connected to any type of
element except a Queue. The output point of a Normal element may be
connected to any type of element except a Combi. It is permissible for the
input and/or output point of a Normal element to remain unconnected.
Entities are routed into a Normal element without the need to fulfill any
requirements other than arriving at the Normal. Upon arrival, an entity is
held for the duration specified by the user, after which it is released. Similar
to the Combi, what matters in CYCLONE is that the arrows connect to the
Normal. For example, if the Normal is followed by and connected to two
elements, the released entity will be duplicated and released into both succeeding
elements. A Normal element can process an unlimited number of entities
simultaneously, because it has no restrictions on the number of available
servers.
Each Normal element has the following properties:

Duration (input): The duration of the task. The time can be constant or
    random as required, though the value should never be negative. Be
    especially wary of probability distributions that are unbound below
    (the normal distribution, for example). Simphony will issue a warning
    if you specify such a distribution.

ReportStatistics (input): A boolean value indicating whether or not the
    statistics associated with the Normal should appear in the Statistics
    Report that Simphony generates.

InterArrivalTime (statistic): A non-intrinsic (time-independent) statistic
    describing the amount of time between entity arrivals. Every time
    an entity arrives at the element, the amount of time that has passed
    since the previous entity arrived is collected.
The Function Element

The Function element in CYCLONE enables the simulationist to
manipulate entities flowing through the model to achieve specific
effects. In particular, this element facilitates the consolidation
and generation of entities depending on the values assigned to
its properties. The consolidate and generate behaviours of the
Function element are activated when their respective properties are set to a
value other than 1.0. The Function element retains arriving entities until
the number specified in its DivideBy property is reached, after which it
generates entities equal to the number specified in its MultiplyBy property
and releases them at the same time. Note that the consolidation of entities
could result in a delay in their journey.
The input point of a Function element may be connected to any type of
element except a Queue. The output point of a Function element may be
connected to any type of element except a Combi. It is permissible for the
input and/or output point of a Function element to remain unconnected.
Each Function element has the following properties:
DivideBy (input): The number of entities to consolidate.

MultiplyBy (input): The number of entities to generate.
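A hedged sketch of the consolidate-and-generate behaviour described above is shown below. It assumes the element simply counts arrivals; the class name FunctionElement and its attributes are illustrative, not Simphony property accessors. The usage lines mirror the riprap example later in this chapter, where one "space" entity is turned into 20 "stone" entities and back again.

    # Illustrative sketch of the Function element's behaviour (not Simphony code).
    class FunctionElement:
        def __init__(self, divide_by=1, multiply_by=1):
            self.divide_by = divide_by      # arrivals to consolidate before releasing
            self.multiply_by = multiply_by  # entities generated on each release
            self.held = 0                   # arrivals retained so far

        def on_arrival(self):
            """Return the number of entities released by this arrival (possibly 0)."""
            self.held += 1
            if self.held < self.divide_by:
                return 0                    # keep consolidating; the entity is delayed
            self.held = 0
            return self.multiply_by         # release the generated entities together

    generate = FunctionElement(divide_by=1, multiply_by=20)       # 1 space -> 20 stones
    consolidate = FunctionElement(divide_by=20, multiply_by=1)    # 20 stones -> 1 space
    print(generate.on_arrival())                                  # 20
    print([consolidate.on_arrival() for _ in range(20)][-1])      # 1, after the 20th stone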
The Counter Element

The Counter element measures production in a CYCLONE
model by recording the time an entity passes through it. The
user can specify a value to record as a multiplier at this element
to better reflect production as the entity passes through
the Counter. The Counter in CYCLONE can also be used to
terminate the simulation: when a user-specified value in the Counter is
reached, the simulation is complete.
The input point of a Counter element may be connected to any type of
element except a Queue. The output point of a Counter element may be
connected to any type of element except a Combi. It is permissible for the
input and/or output point of a Counter element to remain unconnected.
Entities routed into this element do not need to fulfill any constraints.
These entities flow through the element without any delays, and trigger cal-
culations causing the count, time, inter-arrival-time and productivity prop-
erties to be updated. In Simphony, a CYCLONE model can have as many
counters as the simulationist requires, which is not necessarily the case with
other CYCLONE simulators. The Counter element is useful for determining
when to terminate the simulation and for monitoring what is taking place
during execution.
Each Counter element has the following properties:
Initial (input): The initial value of the Counter. This property is generally
set to zero.
ReportStatistics (input): A boolean value indicating whether or not the
Counter should appear in the Statistics Report that Simphony gener-
ates.
Step (input): The amount the counter should be incremented with each
passing entity. Normally, this property is set to 1; however, it is some-
times useful to set it to a value that represents the "capacity" of the
passing entity. For example, in an earthmoving model it could be set
to the capacity of the truck so that the counter is counting cubic me-
ters of dirt delivered rather than truck cycles. It is possible to set this
property to a negative value so that the counter is counting down.
Limit (input): The count value at which simulation will be terminated.
Leave this property set to zero if the counter is not limiting.
Count (output): The number of entities that passed through the Counter
during simulation.
Time (output): The simulation time at which the most recent passing
    entity was observed. Note that this need not be the time at which
    simulation finished, although if the Counter was responsible for
    terminating simulation (i.e., the limit was reached), it will be.

InterArrivalTime (statistic): A non-intrinsic (time-independent) statistic
    describing the amount of time between entity arrivals. Every time
    an entity arrives at the element, the amount of time that has passed
    since the previous entity arrived is collected.

Production (statistic): A non-intrinsic (time-independent) statistic
    describing how the production (i.e., the value of the Count property)
    changes over time.

ProductionRate (statistic): A non-intrinsic (time-independent) statistic
    describing how the production rate (i.e., the ratio of the Count property
    to the simulation time) changes over time.
The (Probabilistic) Branch Element
The Branch element (which is unique to Simphony and not part
of the original CYCLONE specication) is used to model uncer-
tainty associated with events in systems being modelled. Every
time an entity arrives at the element, a random number between
0 and 1 is sampled, which is used to make a decision on whether
the entity is to be routed out through the top or bottom branch.
The input point of a Branch element may be connected to any type of
element except a Queue. The output points of a Branch element may be
connected to any type of element except a Combi. It is permissible for the
input and/or output points of a Branch element to be unconnected. Note
that unlike a regular branch, this is an element by itself, and we may therefore
connect two incoming branches to it without a problem.
Each Branch element has the following properties:

Probability (input): The probability that an arriving entity will be routed
    through the topmost branch. This value must be between 0 and 1.
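The routing decision itself can be sketched in a couple of lines, assuming a uniform random number generator; this mirrors the description above rather than Simphony's internal code.

    import random

    # Sketch of the Branch element's routing rule described above.
    def branch(probability, rng=random.random):
        """Return 'top' with the given probability, otherwise 'bottom'."""
        return "top" if rng() < probability else "bottom"

    random.seed(1)
    routes = [branch(0.3) for _ in range(1000)]
    print(routes.count("top") / len(routes))  # roughly 0.3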
The Composite Element

The Composite element is a Simphony modelling element common
to all Simphony templates, including CYCLONE. Although
it is not part of the CYCLONE system, it can be used to enhance
model presentation, as it serves as a container for sub-models.
It does not participate in the simulation, but rather helps
modellers create neat and readable models. Input and output points
can be dynamically added and removed from this element.
Each Composite element has the following properties:

Inputs (input): The number of input points the element should have.

Outputs (input): The number of output points the element should have.
4.2.2 A CYCLONE Earthmoving Operation

Earthmoving operations are generally classified as heavy civil works within
the construction domain because of the enormous scope of work involved
and the large amount of heavy equipment required. Another attribute of
earthmoving operations is that they involve multiple interacting cycles that
constrain the rate at which the work can be executed. Each of these cycles is
characterized by uncertainties that make it difficult for practitioners within
this domain to generate reliable estimates of the production rates of the
operation, hence further complicating budgeting and scheduling processes.
Simulating the logical sequence, the dynamics, and the uncertainty of such
operations solves this problem in a timely and cost-effective manner. Moreover,
the simulationist can experiment with different scenarios to gain insights into
how to improve the performance of the operation, replacing current improvement
approaches based on gut feelings and trial and error. Parameters that
can be used to track the performance of such a system include equipment
utilization, truck queue lengths, average waiting times in queue, and production
rates. The most commonly tracked performance measure is the production
rate (for the reasons highlighted in the previous paragraph).

The production rate of an earthmoving operation will start at a value of
zero and increase until a particular threshold value, where it stabilizes and is
maintained within certain limits (illustrated in the example). This is because
when the operation commences, the system goes into an unsteady production
state where only a few of the pieces of equipment are participating in the
production, due to the natural effects of start-up of the crews/operators (crews
gaining their work rhythm) and a pile-up of entities (trucks) in the queues
that are waiting to be served. When the crews achieve their work rhythm
and all equipment dedicated to the operation is contributing to the system's
production, the system is said to have achieved a steady state. System
production will have achieved its maximum potential given its constraints
(number of equipment and their capacities). The production rate will be
sustained at that maximum, within limits that are dependent on stochastic
factors that typically affect such operations, such as variable truck travel
speeds and variable loader loading cycle times.
A significant part of the simulation modelling task typically involves
the abstraction and implementation of part of or the entire operation on
a computer within a suitable simulation environment. In this book, we
use Simphony as our simulation modelling environment. Simphony provides
three templates that can be used for modelling earthmoving operations: the
CYCLONE template, General Purpose template and Earth-moving tem-
plate. In this section of the book, we will demonstrate how to use the
CYCLONE template to model typical earthmoving operations. Once a sim-
ulation model of an earth-moving operation has been constructed within a
suitable environment and checked to make sure that it passes all integrity
checks, it can be reliably used to guide a simulationist on how to achieve
a desired production rate by simply varying the number of equipment, the
capacity of equipment or through an approach that involves varying both
parameters.
Problem Statement

A construction project has been defined as per the scope of work it involves.
The contractor selected to do that work has chosen to dedicate one excavator
to the operation. Details of the operation are as follows:
• a backhoe excavates a finite volume of dirt and places it in a dirt pile,

• a front-end loader picks dirt from this pile and places it onto waiting
  trucks,

• loaded trucks travel to the dumpsite,

• trucks are spotted one-at-a-time for dumping,

• a dozer then spreads each dumped pile of dirt,

• empty trucks return for re-loading, and

• the effects of traffic, road profiles (grade), and road roughness are ignored.

The contractor has no choice as far as truck capacity is concerned, but would
like to maximize the production of the other equipment by committing as
many trucks to the project as he/she possibly can. At the same time, the
contractor does not want a situation where some of the trucks committed to
the project are redundant, because they will use up part or all of the
anticipated profit from the project. A schematic layout of a simplified
earthmoving operation is shown in Figure 4.14.
Figure 4.14: Schematic of Earthmoving Operation

The contractor opts to optimize the number of trucks and the production
rate for the operation using a simulation-based approach. He/she decides to
construct a CYCLONE model in Simphony.
Solution

The model input parameters, based on the project scope definition, prevailing
site conditions, and equipment operational attributes (based on manufacturer's
specifications and observations on past similar projects), are summarized
in Table 4.3. The layout of the developed model is presented in
Figure 4.15.
Table 4.3: Earthmoving Data

#    Parameter                                                  Value
1.   Initial quantity of dirt to be excavated (cubic yards)     8,900
2.   Truck capacity (cubic yards)                               8.9
3.   Trucks available                                           5
4.   Excavators available at the loading area                   1
5.   Loaders available at the loading area                      1
6.   Spotters available at the dumpsite                         1
7.   Dozers available at the dumpsite                           1
8.   Excavation duration for 8.9 cubic yards (minutes)          1.2
9.   Loading duration for 8.9 cubic yards (minutes)             2.8
10.  Haul duration for each truck (minutes)                     19.1
11.  Return duration for each truck (minutes)                   15.6
12.  Dumping duration for 8.9 cubic yards (minutes)             2.8
13.  Spreading duration for 8.9 cubic yards (minutes)           8.5

Figure 4.15: CYCLONE Model of Earthmoving Operation
This model layout has a total of 5 cycles: a dirt excavation cycle (Excavator
Cycle), a loading cycle (Loader Cycle), a truck hauling/return cycle
(Truck Cycle), a spotter cycle, and a dozer dirt spreading cycle (Dozer
Cycle). Each of the cycles and the flow units within them will be discussed in
detail in the following paragraphs.
In developing the model, the simulationist makes assumptions on what
the virtual entities in the model will represent in the various parts of the
model. At the Queue element labelled "Initial Dirt," an initial number of
1,000 entities is entered into the Initial property of that Queue element to
model the total volume of 8,900 cubic yards to be excavated. It is assumed
that each entity represents one truck load (8.9 cubic yards) of dirt. At the
start of the simulation (i.e., at time zero), all of these entities will be created.
Entities labelled "Excavators," "Loaders," "Spotters," and "Dozers" are also
created in the Queue elements. These entities represent an excavator, a
loader, a spotter, and a dozer, respectively. The initial quantity specified in
the "Trucks" Queue element is 5, each of the five entities representing a truck.
Simulation event processing commences with one entity from the "Initial
Dirt" Queue combining with one entity from the "Excavators" Queue within
the Combi labelled "Excavate." Processing of events within other Combi
elements is not possible because their Queue elements do not each have at least
one entity. The combined entity is held within the "Excavate" Combi for 1.2
minutes (the time the excavator takes to excavate 8.9 cubic yards of dirt).
Thereafter, the entity is released and gets cloned so that the original entity
is routed back into the "Excavators" Queue element and the cloned entity is
routed into the "Excavated Dirt" Queue element. The entity routed into the
"Excavated Dirt" Queue element represents 8.9 cubic yards of excavated dirt
that has been placed in a stockpile. The second cycle for dirt excavation now
starts with the combination of the excavator entity with another 8.9 cubic
yard dirt entity in the "Excavate" Combi. This cyclic process continues until
the entities in the "Initial Dirt" Queue run out.
As the second excavation cycle begins, the loading of the first truck also
begins, with an entity from the "Excavated Dirt" Queue combining with an
entity from the "Loaders" Queue and an entity from the "Trucks" Queue
within the "Load" Combi. The combined entity is held within the "Load"
Combi for 2.8 minutes (the duration required to load and fill an 8.9-cubic-yard
truck). After this activity, an entity is routed out into the "Haul" Normal
while another entity is cycled back into the "Loaders" Queue to begin another
truck loading cycle, if there are entities present in the "Excavated Dirt" and
"Trucks" Queues. The entity entering the "Haul" Normal represents a loaded
truck traveling from the loading area to the dumpsite. Entities entering this
element will be held for 19.1 minutes.
Loaded trucks arriving at the dumpsite are routed into a Queue element
labelled "Dump Queue." Loaded truck entities wait here for a spotter to
direct them on where to dump their load. There is one spotter at the dumpsite,
who is represented by an entity that is initialized in the Queue element
labelled "Spotters." When there is a spotter entity in the Queue labelled
"Spotters" and a loaded truck entity in the Queue labelled "Dump Queue,"
they get routed into the Combi labelled "Dump," triggering the start of the
dumping activity. After the dumping activity, an entity representing an
empty truck is routed out into a Normal labelled "Return," while another
entity that represents the 8.9 cubic yards of dumped dirt is routed into a
Queue labelled "Dumped Dirt." Also, an entity representing the spotter, which
is now free, is routed into the Queue labelled "Spotters." This makes the
spotter available for the next loaded truck arrival or those that are waiting.
The empty truck entity starts its return journey to the loading area, after
which it is routed into the Queue labelled "Trucks," where it waits to begin its
next cycle.
The entity that represents the 8.9 cubic yards of dumped dirt is combined
with a dozer entity from the "Dozers" Queue within the "Spreading" Combi.
The combined entity is held within this Combi element for 8.5 minutes (the
time required for the dozer to spread 8.9 cubic yards of dirt). Thereafter, an
entity that represents 8.9 cubic yards of spread dirt is released and routed into

Table 4.4: Earthmoving Results

No. of Trucks   Production Rate (yd3/h)
      1                13.26
      2                26.40
      3                39.60
      4                52.80
      5                62.40
      6                62.40
      7                62.40
      8                62.40
a Counter element registering the spread volume produced. The Production
Counter registers observations of the system production rate by registering
the time that each load is spread and by stepping the count by 8.9 cubic
yards. After flowing through the counter, this entity is routed into a Queue
element labelled "Dozers," representing a dozer that is available for the next
spreading cycle.
These cycles keep on going until the simulation model runs out of dirt
entities that were emanating from the "Initial Dirt" Queue element. When
the model is run, the production counter reports an overall productivity of
1.04 cubic yards per minute (which works out to 62.40 cubic yards per hour).
We can now repeat this experiment multiple times, with a different number
of trucks initialized in the "Trucks" Queue element each time. After each
run, a production rate for the operation is recorded. The results obtained are
summarized in Table 4.4, from which it can be noted that the fewest trucks
required to achieve maximum production is 5.
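The trend in Table 4.4 can also be checked with a rough deterministic calculation: below a saturation point, production is limited by the truck fleet (one 8.9 yd3 load per truck round trip), and beyond it by the slowest single-server activity in the system. The Python sketch below uses only the durations in Table 4.3; it is an approximation that ignores the queuing interaction captured by CYCLONE, so it will not match the simulated values exactly.

    # Back-of-envelope check of Table 4.4 using the durations in Table 4.3.
    # Deterministic approximation only; it ignores queuing effects in the model.
    LOAD, HAUL, DUMP, RETURN, SPREAD = 2.8, 19.1, 2.8, 15.6, 8.5  # minutes
    CAPACITY = 8.9                                                # cubic yards per load

    truck_cycle = LOAD + HAUL + DUMP + RETURN     # 40.3 min per truck round trip
    bottleneck = max(LOAD, DUMP, SPREAD)          # slowest single-server activity (dozer)

    for trucks in range(1, 9):
        fleet_limited = trucks * CAPACITY / truck_cycle * 60   # yd3/h if trucks govern
        server_limited = CAPACITY / bottleneck * 60            # yd3/h if the dozer governs
        print(trucks, round(min(fleet_limited, server_limited), 1))
    # Prints roughly 13.3, 26.5, 39.8, 53.0, then a plateau near 62.8 yd3/h,
    # consistent with the simulated values reported in Table 4.4.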
4.3 Hand Simulation
Simulation models are generally processed by a computer using a simulation
algorithm, which in our case, is a discrete event processing algorithm. When
a simulation model is run on a computer, a number of things happen behind
the scenes, which are all initiated and managed by a simulation engine. The
objective of discussing hand simulation is to give you an understanding of
what is going on inside this simulation engine.
The discrete event processing algorithm is based on the concepts of events,
entities, and simulation time. An event is defined as an occurrence that
causes the state of the simulation system to change. For example, an event
might be a truck in an earthmoving simulation arriving at the dump site,
thus causing the state of the truck to change from "hauling" to "dumping";
it may be a welder in a pipe spool fabrication model beginning work on a
spool, thus changing the state of the spool from "fitting" to "welding" and,
at the same time, causing the welder to become unavailable to other spools.
An entity is the primary object associated with an event. In these examples,
the entities are the truck and the spool, respectively. Simulation time is the
time at which events occur.
A discrete event simulation engine is responsible for scheduling and
processing these events. Scheduling events is the process of the simulationist
informing the simulation engine of precisely when an event will occur. To do
this, the simulationist needs to tell the simulation engine three things:
1. The event that is going to occur (e.g., a truck will arrive at the dump
site or a welder will become available to work on a spool),
2. The entity associated with the event (e.g., the particular truck or spool
to which the event applies), and
3. The simulation time at which the event will occur.
The processing of an event happens when the simulation engine advises the
simulationist that the time has come for a previously scheduled event to
occur. When an event is processed, the simulation engine will tell you the
same three pieces of information that were specified at the time the event was
scheduled, namely, what event is being processed, the entity associated with
the event, and the simulation time. In response to this information, the
simulationist will typically update the state of the system and/or schedule further
events. Note that it is not permissible to schedule an event with a simulation
time earlier than that of the event currently being processed (time doesn't
run backwards!).
In order to accomplish these responsibilities, a discrete event simulation
engine requires two things: a list of scheduled events (ordered by simulation
time), and a simulation clock.
The list of scheduled events keeps track of those events that have been
scheduled but not processed. When an event is scheduled it is inserted into
the list at the correct location, and when it is processed it is removed from
the list. The simulation clock keeps track of the current simulation time. It
is initialized to zero at the start of simulation, and thereafter is set to the
simulation time of the event most recently processed.
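Before turning to pencil and paper, it may help to see these two ingredients in code. The following is a minimal, illustrative Python sketch of an event-list engine; the class and function names are assumptions made for illustration and are not the internals of Simphony or any other particular engine.

    import heapq

    # Minimal discrete event engine sketch: a time-ordered event list plus a clock.
    class Engine:
        def __init__(self):
            self.clock = 0.0        # current simulation time (TNOW)
            self.events = []        # scheduled-but-unprocessed events
            self.sequence = 0       # tie-breaker: earlier-scheduled events go first

        def schedule(self, time, entity, handler):
            assert time >= self.clock, "cannot schedule an event in the past"
            heapq.heappush(self.events, (time, self.sequence, entity, handler))
            self.sequence += 1

        def run(self):
            while self.events:
                time, _, entity, handler = heapq.heappop(self.events)
                self.clock = time          # advance the clock to the event time
                handler(self, entity)      # process the event (may schedule more)

    # Example: a truck that finishes loading (7 min) and then travels (17 min).
    def loaded(engine, truck):
        print(f"{engine.clock:5.1f}  {truck} loaded")
        engine.schedule(engine.clock + 17, truck, arrived)

    def arrived(engine, truck):
        print(f"{engine.clock:5.1f}  {truck} arrives at the dump site")

    eng = Engine()
    eng.schedule(7, "truck A", loaded)
    eng.run()   # prints the loading event at time 7, then the arrival at time 24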
Hand simulation is the process of emulating a discrete event simulation
engine using paper and pencil. Like any discrete event simulation engine,
this one requires a list of events and a simulation clock. On paper, the list
of simulation events is maintained by using two columns: the first, typically
labelled "Events," records events as they are scheduled. The second, typically
labelled "Chronological," records events as they are processed. The simulation
clock is emulated by using a third column, typically labelled "TNOW."
The algorithm used to emulate a discrete event simulation engine by
hand is shown in Figure 4.16. We will illustrate how the algorithm works in
a moment by looking at a simple example, but first we need to discuss the
statistical portion of the algorithm.
[Figure 4.16 presents the hand simulation algorithm as a flowchart with a
generation phase and an advancement phase. Generation phase: starting with
TNOW = 0, while an activity can begin, move the entities to the activity,
generate a (possibly stochastic) duration ∆ for the activity, calculate the event
time TNOW + ∆, and record the event in the event list. Advancement phase:
when no activity can begin, record intrinsic statistical observations; if the
event list is empty, end; otherwise transfer the earliest event on the event
list to the chronological list, set TNOW to the time of the transferred event,
release entities from the activity, and return to the generation phase.
Notes: (1) a Combi can begin if all preceding queue nodes contain at least
one entity; (2) a Normal can begin if any preceding activity has released an
entity; (3) in the case of a tie, the earliest event is considered to be the one
that was scheduled (i.e., recorded) first.]

Figure 4.16: Hand Simulation Algorithm
4.3.1 Hand Simulation with Statistics

Most simulation engines provide services for collecting and displaying statistics
of observations made during simulation. Examples of such statistics
include resource utilization, production rates, waiting times, and file lengths.
The statistics collected during simulation fall into two broad categories:
intrinsic (time-dependent) and non-intrinsic (time-independent). An intrinsic
statistic is one in which the amount of time that a particular observation
is in place must be taken into account when calculating the average (or any of
the other familiar statistical estimators: variance, standard deviation, etc.).
In other words, for intrinsic statistics we need to take a weighted average.
The observations need to be weighted by the amount of time the statistic
held that particular value. Non-intrinsic statistics do not have this requirement:
the statistical estimators are calculated in the usual way. Examples
of intrinsic statistics include resource utilization and file lengths. Examples
of non-intrinsic statistics include production rates and waiting times.
When performing a hand simulation it is important to distinguish between
the two types of statistics because the steps involved in tracking them are
slightly different: for intrinsic statistics, a value must be collected for every
row in the hand simulation, while for non-intrinsic statistics, observations
need only be collected at the relevant times.
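To make the distinction concrete, the following Python sketch (with names assumed purely for illustration) computes a time-weighted mean from a list of (time, value) observations, where each value is taken to hold until the next observation; the ordinary average of the same values is shown for contrast.

    # Sketch: intrinsic (time-weighted) vs. non-intrinsic (ordinary) means.
    def time_weighted_mean(observations, end_time):
        """observations: list of (time, value); each value holds until the next time."""
        total_area = 0.0
        for (t0, value), (t1, _) in zip(observations, observations[1:] + [(end_time, None)]):
            total_area += value * (t1 - t0)   # weight each value by how long it held
        return total_area / (end_time - observations[0][0])

    def ordinary_mean(values):
        return sum(values) / len(values)

    # A resource busy (1) from time 0 to 14 and again from 40 to 47, idle in between:
    utilization = [(0, 1.0), (14, 0.0), (40, 1.0)]
    print(round(time_weighted_mean(utilization, 47), 3))          # 0.447
    print(round(ordinary_mean([v for _, v in utilization]), 3))   # 0.667, misleading here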
4.3.2 Example: A Simple Earthmoving Model
Let's consider the model of a simple earthmoving operation shown in Fig-
ure 4.17. In this operation, two trucks are loaded by a single front-end loader,
they travel to a dump site, dump, and return to begin the cycle anew. It
takes 7 minutes for the loader to load a truck, 17 minutes for a truck to travel
to the dump site, 3 minutes for a truck to dump, and 13 minutes for a truck
to make the return journey. A CYCLONE model of the operation is shown
in Figure 4.18.
This model defines three entities: one entity representing the loader, and
two entities representing two trucks. In our hand simulation, we will be most
interested in the two trucks. From a process perspective, the two trucks are
identical; however, from the perspective of the simulation engine, they are
distinct entities, so in what follows, we'll refer to one truck as "A" and the
other as "B."
Figure 4.17: Schematic of a Simple Earthmoving Operation

Figure 4.18: CYCLONE Model of a Simple Earthmoving Operation
To begin, take a sheet of paper and at the top write headings for the
following columns: "TNOW," "Events," "Chronological," "Prod," and "Util."
Note that the fourth and fifth columns, "Prod" and "Util," are not a part of
the simulation engine; instead, we'll be using them to track the productivity of
our system (a non-intrinsic statistic) and the utilization of the loader (an
intrinsic statistic), respectively.
The first step of the algorithm is to set TNOW = 0, so under the heading
"TNOW" write the number zero. Your paper should look something like this:
TNOW Events Chronological Prod Util
0
The next step of the algorithm asks whether an activity can begin. The
answer is yes; the Combi labelled "Load" can begin because entities are
present in both of its preceding queues. Let's assume that truck A goes
first, so the entity representing truck A is moved to the Load activity
together with the entity representing the loader. The duration of the Load
activity is 7 minutes and TNOW is currently 0, so the time at which the
activity will finish is TNOW + ∆ = 0 + 7 = 7. We record this under the
"Events" column; the paper now looks like this:
TNOW Events Chronological Prod Util
0 A loaded @ 7
We now move back to the question of whether an activity can begin. This
time the answer is no; truck B is available to be loaded in the "Trucks" queue,
but the "Loader" queue is empty because the loader is currently busy with
truck A. We now need to record statistics. In this case we're only concerned
with the utilization, as it is intrinsic. As the loader is currently busy, we
record 100% in the "Util" column. Now we move to the next question: is the
event list empty? Again, the answer is no; the event we just recorded is in
the list. Next, we need to scan the event list for the earliest event, which is
easy as there is only one event. We copy this event into the "Chronological"
column and cross it out from the "Events" column. Finally, we need to set
TNOW to the time of this event, so we cross out the 0 in the "TNOW" column
and write a 7 underneath. Our sheet of paper now looks like this:
TNOW Events Chronological Prod Util
0 A loaded @ 7 - - 100%
7 A loaded @ 7
At this point, the task of loading truck A is complete and both the truck
and the loader are released from the Load activity. We now return to the
question: can an activity begin? This time the answer is yes; the "Load"
activity can begin (because the loader and truck B are available in their
respective queues) and the "Travel" activity can begin (because truck A was
just released from the Load activity). It does not matter which activity we
choose to deal with first (the algorithm will produce the same results in either
case), so let's pick the "Load" activity. First, we move the entity representing
truck B and the entity representing the loader to the Load activity, and then
we calculate the event time, which is TNOW + ∆ = 7 + 7 = 14. We record this
event in the event list. Again we ask the question: can an activity begin?
The answer is yes, as we still need to deal with the Travel activity. The entity
representing truck A is now moved to the Travel activity, and the event time
is calculated to be TNOW + ∆ = 7 + 17 = 24. This event is also recorded
under the "Events" column. Our sheet of paper now looks like this:
TNOW Events Chronological Prod Util
0 A loaded @ 7 - - 100%
7 B loaded @ 14 A loaded @ 7
A arrives @ 24

Again, we ask the question: can an activity begin? This time the answer
is no; truck A is busy traveling and truck B is being loaded. We therefore
need to record statistics: the loader is still busy (this time with truck B),
so we record 100% in the "Util" column. Next, the event list isn't empty,
so we need to scan the list for the earliest event, which is the completion of
loading truck B. We cross this event out under the "Events" column and copy
it to the "Chronological" column, and then update TNOW to 14. Our sheet
of paper now looks like this:
TNOW Events Chronological Prod Util
0 A loaded @ 7 - - 100%
7 B loaded @ 14 A loaded @ 7 - 100%
14 A arrives @ 24 B loaded @ 14

The "Load" activity is now complete and the entities representing truck
B and the loader are released. We return to the question: can an activity
begin? This time the answer is yes; truck B can begin the "Travel" activity.
We move the entity to the Travel activity and calculate the finish time as
TNOW + ∆ = 14 + 17 = 31. We record this new event under the "Events"
column:
TNOW Events Chronological Prod Util
0 A loaded @ 7 - - 100%
7 B loaded @ 14 A loaded @ 7 - 100%
14 A arrives @ 24 B loaded @ 14
B arrives @ 31

This time, when we ask if an activity can begin, the answer is no; both
trucks are in the process of travelling to the dump site. We record the
utilization of the loader (it's now idle) and scan the event list. We see that
the arrival of truck A at the dump site is the earliest event, so it is crossed
out from the "Events" column and transferred to the "Chronological" column,
and TNOW is updated to 24:
TNOW Events Chronological Prod Util
0 A loaded @ 7 - - 100%
7 B loaded @ 14 A loaded @ 7 - 100%
14 A arrives @ 24 B loaded @ 14 - 0%
24 B arrives @ 31 A arrives @ 24

The "Travel" activity is now complete and the entity representing truck A
is released. At this point, truck A can begin the "Dump" activity. The finish
time for this event is TNOW + ∆ = 24 + 3 = 27, and an event is recorded:
TNOW Events Chronological Prod Util
0 A loaded @ 7 - - 100%
7 B loaded @ 14 A loaded @ 7 - 100%
14 A arrives @ 24 B loaded @ 14 - 0%
24 B arrives @ 31 A arrives @ 24
A dumped @ 27

There are no other activities that can begin at this time, so we record
the utilization and scan the event list for the earliest event. This turns out
to be the event we just scheduled, so that event is crossed out from the
"Events" column and transferred to the "Chronological" column. TNOW is
then updated to 27 and the entity representing truck A is released. At this
point, the truck A entity passes through the production counter, and we
record the production in the "Prod" column (1 truckload in 27 minutes).
Once it has passed the production counter, truck A can begin the "Return"
activity. The finish time is TNOW + ∆ = 27 + 13 = 40, and the event is added
to the "Events" column. Our sheet of paper now looks like this:
TNOW Events Chronological Prod Util
0 A loaded @ 7 - - 100%
7 B loaded @ 14 A loaded @ 7 - 100%
14 A arrives @ 24 B loaded @ 14 - 0%
24 B arrives @ 31 A arrives @ 24 - 0%
27 A dumped @ 27 A dumped @ 27 1/27
A returns @ 40
As no other activities can begin, we need to record the utilization. Next,
as the event list is not empty, we need to scan it for the earliest event. This is
the arrival of truck B at the dump site. The event is removed from the event
list, added to the chronological list, and TNOW is updated to 31. Truck B is
now released from the "Travel" activity and it can begin the "Dump" activity.
The finish time is TNOW + ∆ = 31 + 3 = 34, and the event is added to the
event list. Our sheet of paper now looks like this:
TNOW Events Chronological Prod Util
0 A loaded @ 7 - - 100%
7 B loaded @ 14 A loaded @ 7 - 100%
14 A arrives @ 24 B loaded @ 14 - 0%
24 B arrives @ 31 A arrives @ 24 - 0%
27 A dumped @ 27 A dumped @ 27 1/27 0%
31 A returns @ 40 B arrives @ 31
B dumped @ 34
Once again there are no other activities that can begin, and there are
still events to process. The loader continues to be idle, so we record that
under the "Util" column. The earliest event in the event list is the one we
just scheduled, so it is removed, transferred to the chronological list, and
TNOW is updated to 34. Truck B is now released from the "Dump" activity
and passes through the production counter. As with truck A, we record
the production in the "Prod" column (2 truckloads in 34 minutes). After
passing the production counter, truck B can begin the "Return" activity, which
has a finish time of TNOW + ∆ = 34 + 13 = 47. Once this event is added to
the event list, our paper looks like this:
TNOW Events Chronological Prod Util
0 A loaded @ 7 - - 100%
7 B loaded @ 14 A loaded @ 7 - 100%
14 A arrives @ 24 B loaded @ 14 - 0%
24 B arrives @ 31 A arrives @ 24 - 0%
27 A dumped @ 27 A dumped @ 27 1/27 0%
31 A returns @ 40 B arrives @ 31 - 0%
34 B dumped @ 34 B dumped @ 34 2/34
B returns @ 47
Again, there are no other activities that can begin and there are still
events to process. The loader continues to be idle, so we record that under
the "Util" column. The earliest event is the return of truck A to the loading
site, so this event is removed, transferred to the chronological list, and TNOW
is updated to 40. Truck A is released from the "Return" activity and can now
begin the "Load" activity (because the loader is available), which has a finish
time of TNOW + ∆ = 40 + 7 = 47. Once this event is added to the event list,
our paper looks like this:
TNOW Events Chronological Prod Util
0 A loaded @ 7 - - 100%
7 B loaded @ 14 A loaded @ 7 - 100%
14 A arrives @ 24 B loaded @ 14 - 0%
24 B arrives @ 31 A arrives @ 24 - 0%
27 A dumped @ 27 A dumped @ 27 1/27 0%
31 A returns @ 40 B arrives @ 31 - 0%
34 B dumped @ 34 B dumped @ 34 2/34 0%
40 B returns @ 47 A returns @ 40
A loaded @ 47
No further activities can begin and the event list still contains events, so
we need to record the utilization (100% this time, as the loader is busy with
truck A) and scan the event list for the earliest event. This time the result is
a tie; both trucks are scheduled to complete their activities at time 47. We
need to make use of our tie-breaking procedure, which states that in the case
of a tie, the earliest event is the highest on the list (which will be the event
that was recorded first). Thus, we will process the return of truck B to the
loading site first. This event is removed from the event list, transferred to
the chronological list, and TNOW is updated to 47. Truck B is now released
from the "Return" activity; however, it cannot begin the "Load" activity, as the
loader is still busy with truck A. Our paper now looks like this:
TNOW Events Chronological Prod Util
0 A loaded @ 7 - - 100%
7 B loaded @ 14 A loaded @ 7 - 100%
14 A arrives @ 24 B loaded @ 14 - 0%
24 B arrives @ 31 A arrives @ 24 - 0%
27 A dumped @ 27 A dumped @ 27 1/27 0%
31 A returns @ 40 B arrives @ 31 - 0%
34 B dumped @ 34 B dumped @ 34 2/34 0%
40 B returns @ 47 A returns @ 40 - 100%
47 A loaded @ 47 B returns @ 47
At this point, no further activities can begin, so we record the utilization.
There is only one event in the event list (the completion of loading truck
A), so it is transferred to the chronological list. TNOW does not need to
be updated, as its value was already updated to 47 previously, but we transfer
its value to the next row nevertheless. Truck A is released from the "Load"
activity and two activities can now begin: the loading of truck B and the
traveling of truck A. Once these activities are scheduled, our sheet of paper
will look like this:
TNOW Events Chronological Prod Util
0 A loaded @ 7 - - 100%
7 B loaded @ 14 A loaded @ 7 - 100%
14 A arrives @ 24 B loaded @ 14 - 0%
24 B arrives @ 31 A arrives @ 24 - 0%
27 A dumped @ 27 A dumped @ 27 1/27 0%
31 A returns @ 40 B arrives @ 31 - 0%
34 B dumped @ 34 B dumped @ 34 2/34 0%
40 B returns @ 47 A returns @ 40 - 100%
47 A loaded @ 47 B returns @ 47 - 100%
47 B loaded @ 54 A loaded @ 47
A arrives @ 64
Both trucks have now completed a full cycle. We leave it as an exercise
for the reader to continue the simulation for another cycle.
There are a couple of things worth noting about our results. First, the
"TNOW" column clearly shows the variability of the time step; sometimes
TNOW increased by 7, sometimes by 10, and once it didn't increase at all (since
there were two events to process with the same simulation time). This is
entirely characteristic of discrete event simulation. Second, the chronological
list provides us with a story line of what happened during simulation. Even
though the various events were not scheduled in chronological order, the
algorithm ensures that they are processed in the correct order.
Calculation of Productivity and Utilization

The productivity of our system is easy to calculate: it is simply the last
observation recorded by the Counter element in the "Prod" column, which
is 2/34 ≈ 0.0588 truckloads per minute, or 3.53 truckloads per hour. The
average utilization of the loader, however, is a little more difficult because it
is an intrinsic statistic: the observations need to be weighted by the amount
of time the statistic held each particular value.
The easiest way to perform this calculation is to take the observations
listed in the "Util" column and plot them with respect to simulation time, to
get the step function shown in Figure 4.19.
Figure 4.19: Loader Utilization vs. Simulation Time
If we denote this function by f, then the average utilization will be the
area under f divided by the total simulation time:

\[
\frac{1}{47}\int_0^{47} f(x)\,dx
= \frac{100\% \times (14-0) + 0\% \times (40-14) + 100\% \times (47-40)}{47}
\approx 44.7\%.
\]
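The same result can be reproduced programmatically with the time-weighted helper sketched in the statistics discussion above, feeding it the (TNOW, Util) observations recorded during the hand simulation.

    # Reproducing the ~44.7% loader utilization from the "Util" column observations.
    # Uses the time_weighted_mean helper sketched earlier (an illustrative name).
    util_column = [(0, 1.0), (7, 1.0), (14, 0.0), (24, 0.0), (27, 0.0),
                   (31, 0.0), (34, 0.0), (40, 1.0), (47, 1.0)]
    print(round(time_weighted_mean(util_column, end_time=47), 3))  # 0.447, i.e. ~44.7%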
4.3.3 Example: Riprap Installation

A contractor is completing work on a storm sewer outfall that requires riprap
installation along the outfall location, as shown in Figure 4.21. The area is
fairly tight as it is at the riverbank and only a small area is secured for the
worksite. A single skid steer loader is used to provide the required stone to
three labourers working independently, who place each stone by hand. The
skid steer delivers loads of 20 stones each, which it collects from a stockpile
maintained nearby. At the riverbank, there is only sufficient room for, at
most, 3 loads (60 stones), so if the labourers have not completed placement
of a load, the skid steer must wait.
A CYCLONE model for this operation is shown in Figure 4.22. At the
beginning of simulation, it contains 7 entities: one representing the skid steer
loader, three representing the labourers, and three more representing spaces
for the skid steer to place loads of stone. When the skid steer supplies a load
of stone, the entity representing the available space is converted to 20 entities
representing the stones by a Generate element. As these stones are placed
by the labourers, they are converted back to a single entity representing
available space by a Consolidate element.
A time study was carried out to obtain the durations required for the
resupply and the placement of stones. The cumulative distribution functions
(CDFs) generated from the observed data are presented in Figure 4.20. In
this example, we will approximate generation of random deviates from these
CDFs using the roll of a six-sided die. The possible outcomes from this
roll are listed on the y -axis of the charts and the corresponding x-values are
indicated on the curve.
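Mapping a die roll to a duration in this way is simply a discrete inverse-CDF lookup. The sketch below hard-codes the values read off Figure 4.20; the dictionary and function names are illustrative assumptions, not part of any library.

    import random

    # Discrete inverse-CDF lookup for Figure 4.20: die roll -> activity duration (min).
    RESUPPLY = {1: 6.1, 2: 6.5, 3: 6.9, 4: 7.1, 5: 7.5, 6: 7.9}
    PLACEMENT = {1: 1.58, 2: 1.62, 3: 1.66, 4: 1.71, 5: 1.76, 6: 1.83}

    def sample_duration(cdf_table, rng=random):
        roll = rng.randint(1, 6)      # each outcome has probability 1/6
        return roll, cdf_table[roll]

    random.seed(4)
    print(sample_duration(RESUPPLY))   # e.g. a roll and its resupply time
    print(sample_duration(PLACEMENT))  # e.g. a roll and its placement time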
In this example, we will illustrate how to collect an intrinsic statistic
describing the number of stones available for placement by the labourers. We
begin by taking a sheet of paper and writing the following headings at the
top: "TNOW," "Events," "Chronological," and "Stones." The first step of the
hand simulation algorithm is to set TNOW = 0, so under the heading "TNOW"
write the number zero. Your paper should look something like this:
TNOW Events Chronological Stones
0.00

The next step of the algorithm is to ask the question: can an activity
begin? The answer is yes: the skid steer can supply stone for the labourers.
The entity representing the skid steer and the entity representing space for the
[Figure 4.20 presents the two CDFs as step charts; the die-roll outcomes and
the corresponding durations read from the curves are:]

Die Roll   Resupply Time (min)   Placement Time (min)
   1              6.1                  1.58
   2              6.5                  1.62
   3              6.9                  1.66
   4              7.1                  1.71
   5              7.5                  1.76
   6              7.9                  1.83

Figure 4.20: Riprap Installation CDFs
Figure 4.21: Schematic of Riprap Installation

Figure 4.22: CYCLONE Model of Riprap Installation
stones are moved to the resupply activity and a die roll is made to determine
the duration. The roll results in a 1, so the duration of the activity will be
6.10 minutes. The event time is therefore: TNOW + ∆ = 0.00 + 6.10 = 6.10.
We record this event under the "Events" column. The paper now looks like
this:
TNOW Events Chronological Stones
0.00 Resupply @ 6.10

At this point no further activities can begin: the skid steer is busy bring-
ing the first load of stones, and the labourers are idle as they have no stones
to place. We therefore need to record that the number of stones available
to be placed is 0, and then scan the event list for the earliest event. The
resupply of stones at simulation time 6.10 is the only event, so it is crossed
out, transferred to the chronological list, and TNOW is updated to 6.10. The
paper now looks like this:

TNOW Events Chronological Stones
0.00 Resupply @ 6.10 - 0
6.10 Resupply @ 6.10

The entities are now released from the resupply activity. The skid steer
returns to its queue, and the available space is converted to 20 stones by the
Generate element and all 20 are queued for the labourers. We now return to
the question: can an activity begin? This time there are four activities that
can begin: each of the labourers can begin placing a stone and the skid steer
can begin supply of the next load of stones. To calculate the duration of
the placement activities, the die is rolled three times and the numbers 5, 1,
and 5 result. These correspond to durations of 1.76, 1.58, and 1.76 minutes,
respectively, and end event times of 7.86, 7.68, and 7.86. These three events
are added to the event list. Finally, the die is rolled again to determine the
duration of the resupply activity and the result is 2, so resupply will take 6.5
minutes and complete at simulation time 12.60. This event is also added to
the event list. The paper now looks like this:
TNOW Events Chronological Stones
0.00 Resupply @ 6.10 - 0
6.10 Labourer A @ 7.86 Resupply @ 6.10
Labourer B @ 7.68
Labourer C @ 7.86
Resupply @ 12.60
At this point, no further activities can begin, so we need to record intrinsic
statistics. The number of stones sitting in the pile is 17 (the 20 supplied by
the skid steer less the 3 picked up by the labourers), and this is recorded in
the "Stones" column. Next, we scan the event list for the earliest event. This
is labourer B laying a stone at time 7.68. This event is transferred to the
chronological list and TNOW is updated to 7.68. The paper now looks like
this:
TNOW Events Chronological Stones
0.00 Resupply @ 6.10 - 0
6.10 Labourer A @ 7.86 Resupply @ 6.10 17
7.68 Labourer B @ 7.68 Labourer B @ 7.68
Labourer C @ 7.86
Resupply @ 12.60

At this point, labourer B can begin placing another stone. The die roll
to determine duration is 6, so placement of the stone will take 1.83 minutes
and finish at time 9.51. After this event is added to the event list, the paper
looks like this:
TNOW Events Chronological Stones
0.00 Resupply @ 6.10 - 0
6.10 Labourer A @ 7.86 Resupply @ 6.10 17
7.68 Labourer B @ 7.68 Labourer B @ 7.68
Labourer C @ 7.86
Resupply @ 12.60
Labourer B @ 9.51

At this point, no further activities can begin, so we record the number of
stones awaiting placement (16) and scan the event list for the earliest event.
Both labourers A and C are scheduled to place stones at time 7.86, so we need
to make use of our tie-breaking procedure. In this case, the event highest in
the event list is the placing of a stone by labourer A, so that is the event that
gets transferred to the chronological list. Once TNOW is updated to 7.86, the
paper will look like this:
TNOW Events Chronological Stones
0.00 Resupply @ 6.10 - 0
6.10 Labourer A @ 7.86 Resupply @ 6.10 17
7.68 Labourer B @ 7.68 Labourer B @ 7.68 16
7.86 Labourer C @ 7.86 Labourer A @ 7.86
Resupply @ 12.60
Labourer B @ 9.51
We leave it as an exercise for the reader to continue this simulation until
the skid steer has supplied the third load of stone. An example is shown
below:
TNOW Events Chronological Stones
0.00 Resupply @ 6.10 - 0
6.10 Labourer A @ 7.86 Resupply @ 6.10 17
7.68 Labourer B @ 7.68 Labourer B @ 7.68 16
7.86 Labourer C @ 7.86 Labourer A @ 7.86 15
7.86 Resupply @ 12.60 Labourer C @ 7.86 14
9.44 Labourer B @ 9.51 Labourer A @ 9.44 13
9.51 Labourer A @ 9.44 Labourer B @ 9.51 12
9.52 Labourer C @ 9.52 Labourer C @ 9.52 11
11.10 Labourer A @ 11.15 Labourer C @ 11.10 10
11.15 Labourer B @ 11.17 Labourer A @ 11.15 9
11.17 Labourer C @ 11.10 Labourer B @ 11.17 8
12.60 Labourer C @ 12.86 Resupply @ 12.60 28
12.86 Labourer A @ 12.98 Labourer C @ 12.86 27
12.88 Labourer B @ 12.88 Labourer B @ 12.88 26
12.98 Resupply @ 18.70 Labourer A @ 12.98 25
14.48 Labourer C @ 14.48 Labourer C @ 14.48 24
14.64 Labourer B @ 14.64 Labourer B @ 14.64 23
14.64 Labourer A @ 14.64 Labourer A @ 14.64 22
16.19 Labourer C @ 16.19 Labourer C @ 16.19 21
16.35 Labourer B @ 16.40 Labourer A @ 16.35 20
16.40 Labourer A @ 16.35 Labourer B @ 16.40 19
18.02 Labourer C @ 18.02 Labourer C @ 18.02 18
18.02 Labourer A @ 18.18 Labourer B @ 18.02 17
18.18 Labourer B @ 18.02 Labourer A @ 18.18 16
18.70 Labourer B @ 19.68 Resupply @ 18.70 36
Labourer C @ 19.60
Labourer A @ 19.80

Calculation of the Mean Number of Stones Available

As mentioned above, the number of stones available is an intrinsic statistic.
The best way to calculate the mean of an intrinsic statistic is to plot the value
of the statistic vs. simulation time and then calculate the area under the
resulting step function and divide by the final simulation time. Figure 4.23
shows this chart for the simulation we just completed.
If we denote this function by f, then the average number of stones available
will be the area under f divided by the total simulation time:
\[
\frac{1}{18.7}\int_0^{18.7} f(x)\,dx = \frac{1}{18.7} \times 214.43 \approx 11.467.
\]

Figure 4.23: Stones Available vs. Simulation Time
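As with the loader utilization, this figure can be checked with the time-weighted helper sketched earlier by feeding it the (TNOW, Stones) pairs recorded in the hand simulation table.

    # Reproducing the area of 214.43 and the mean of ~11.47 stones available,
    # using the time_weighted_mean helper sketched earlier.
    stones = [(0.00, 0), (6.10, 17), (7.68, 16), (7.86, 15), (7.86, 14), (9.44, 13),
              (9.51, 12), (9.52, 11), (11.10, 10), (11.15, 9), (11.17, 8), (12.60, 28),
              (12.86, 27), (12.88, 26), (12.98, 25), (14.48, 24), (14.64, 23), (14.64, 22),
              (16.19, 21), (16.35, 20), (16.40, 19), (18.02, 18), (18.02, 17), (18.18, 16),
              (18.70, 36)]
    print(round(time_weighted_mean(stones, end_time=18.7), 3))  # about 11.467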
4.4 North LRT Case Study
In this section, we demonstrate how simulation can be used to plan a light
rail transit (LRT) tunnel. The project involves a short tunnel that is to be
built using the sequential excavation method (SEM). We use the CYCLONE
template of the Simphony simulation environment to build the model and
then use it to plan construction.

4.4.1 Project Description

The Metro Line – the North LRT (NLRT)¹ to the Northern Alberta Institute
of Technology (NAIT) – is a 3.3 km extension from Churchill LRT Station in
downtown Edmonton northwest to NAIT, as demonstrated in Figure 4.24.
Consultants were retained by the prime consultant (the design engineer)
to provide a plan for the tunnel section of the project. The scope of the
project, therefore, is limited to the tunnel section, shown as a dotted green
line on Figure 4.24.
The design documents show that the LRT tunnel section consists of two
parallel 8 m diameter tunnels with a total length of 764 m, extending from the
existing underground Churchill Station to street level at MacEwan station.
The tunnel section is divided into two parts that are demarcated by a pre-
built section of the tunnel under the EPCOR Tower (marked on Figure 4.24
as "EPCOR Tower"). The EPCOR Tower intersects the LRT alignment, and
was built prior to the LRT. The City negotiated with the builder to pre-build
the LRT tunnel section under the tower to avoid changing the alignment when
the LRT is built. A pre-built section of the tunnel presents challenges to the
construction of the remaining portions, especially as they relate to continuity
of the excavation process. Indeed, such discontinuity renders a TBM (tunnel
boring machine) inefficient for the short sections. The team quickly evaluated
those options and decided to focus on a sequential excavation method.

4.4.2 Understanding the SEM Process
Prior to building a simulation model of a process, we need to understand
the construction method itself. There are numerous reference materials on
the New Austrian Tunnelling Method (NATM), which we refer to as the
Sequential Excavation Method (SEM).
Rabcewicz (1948) was issued a patent for the NATM. The approach was
described as follows:

NATM is based on the principle that it is desirable to take utmost
advantage of the capacity of the rock to support itself, by care-
fully and deliberately controlling the forces in the readjustment
process, which takes place in the surrounding rock after a cavity
has been made, and to adapt the chosen support accordingly.

Tunnelling creates a cavity in the ground, which, if left unsupported, will
collapse. To support the ground, the NATM uses shotcrete to redistribute
stress. The shotcrete, when applied uniformly along the cavity, makes a
¹This project is provided courtesy of the City of Edmonton LRT Design and Construction.
The design was completed by ILF Engineering as a sub-consultant to AECOM Canada. The
models were built by Hala AbouRizk-Newstead as a project in a construction simulation
course at the University of Alberta, April 24, 2011.

Figure 4.24: North LRT to NAIT Map (City of Edmonton, 2012)



complete ring, which takes the stresses from the ground and redistributes
them to the surrounding ground area. However, there are numerous issues
to address. The time the ground can remain stable upon excavation greatly
depends on the geotechnical conditions in the area. Dry sand, for example,
will not support itself and tends to collapse immediately upon excavation.
Hard rock may not require any shotcrete in the interim, and can be bolted
to provide temporary support. In general terms, tunnels will have a primary
liner (material like shotcrete or steel and lagging) to provide support during
construction. Then a secondary liner is used to provide the ultimate support.
Finally, for unstable ground, excavation and support are typically staged in
small sections to avoid leaving large areas unprotected and prone to failure.
This is called benched construction. The tunnel face shown in Figure 4.26
can be subdivided into sections. The top section in this figure is called
the top heading while the bottom section is called the bench/invert section.
We start by excavating the top heading, advancing 1 or 2 meters, and then apply
shotcrete and temporary support at the bottom of the top heading (often
in the form of steel beams called lattice girders). Then, we excavate the
invert section, apply shotcrete to complete the ring, and repeat the process.
Different equipment is used depending on the size of the tunnel and the
ground conditions. For example, a backhoe or a rock grinder can be used,
depending on whether the material is soft or hard. The transportation of
the material is done using loaders and trucks. For smaller utility tunnels,
for example, smaller machines are used and often muck cars and a train are
utilized instead of loaders and trucks due to size restrictions and depth of
the tunnel.
The Federal Highway Administration (2009) provides a good summary
of the NATM method for the interested reader.

4.4.3 The North LRT Tunnel


The LRT tunnel we are concerned with was designed by ILF Consultants
Inc. A summary of the preliminary design is given in Figures 4.25 and 4.26.
From the design, the construction method can be summarized as follows:

1. Complete the top section (heading) in two stages (each one is 1 meter
deep).
Stage 1:
Figure 4.25: North LRT Excavation and Support General Arrangement



Figure 4.26: North LRT Excavation and Support Standard Support



• Drill probe/drainage holes.
• Excavate 1 meter (heading).
• Remove dirt (by conveyor and a truck).
• Apply first layer of sealing shotcrete to all exposed surfaces.
• Install first layer of welded wire fabric (WWF).
• Install lattice girders.
• Apply second layer of shotcrete.
• Install additional supports where applicable.
• Install second layer of welded wire fabric (WWF).
• Apply third (and final) layer of shotcrete.
• Check for utility extension interval.
• Check for surveying interval.

Stage 2:

• Excavate the next meter using the same process as stage 1.

2. Complete the bottom section (bench/invert).


Stage 3:

• Excavate one advance of bench/invert (2 m).
• Apply sealing shotcrete.
• Install first layer of welded wire fabric (WWF).
• Install lattice girders.
• Install additional measures where applicable.
• Apply second layer of shotcrete.
• Install second layer of welded wire fabric (WWF).
• Install additional support where applicable.
• Apply last layer of shotcrete.
• Place temporary backfill.

Table 4.6: North LRT Cycle Time Analysis

Activity               Quantity    Prod. Rate   Work Time   Mob. Time   Total Time
                                                   (min)       (min)       (min)
Probe/drainage holes    3.90 m      20 m/hr        11.70        0.00       11.70
Excavation             17.00 m3     40 m3/hr       25.50       15.00       40.50
Mucking                23.80 m3     50 m3/hr       28.56        0.00       28.56
Survey/map              N/A         N/A            20.00       15.00       35.00
1st layer shotcrete     1.60 m3      8 m3/hr       12.03       15.00       27.03
Face dowel              0.00 m      60 m/hr         0.00        0.00        0.00
1st layer WWF           N/A         N/A            30.00       15.00       45.00
Lattice girders         N/A         N/A            30.00       15.00       45.00
2nd layer shotcrete     2.31 m3      8 m3/hr       17.32       15.00       32.32
2nd layer WWF           N/A         N/A            30.00       15.00       45.00
3rd layer shotcrete     0.59 m3      8 m3/hr        4.41       15.00       19.41
Totals                                             209.52      120.00      329.52
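The Work Time column in Table 4.6 follows directly from the quantities and production rates (work time in minutes = quantity ÷ rate × 60), with the mobilization allowance shown in the table added on top. The sketch below reproduces that arithmetic from the table values; small differences from the printed figures are rounding.

```python
# Recompute the Table 4.6 cycle time (minutes) from quantities and rates.
# (name, quantity, hourly rate, mobilization) for rate-driven activities,
# (name, fixed work time, mobilization) for the rest.
rate_driven = [("Probe/drainage holes", 3.90, 20, 0),
               ("Excavation", 17.00, 40, 15),
               ("Mucking", 23.80, 50, 0),
               ("1st layer shotcrete", 1.60, 8, 15),
               ("Face dowel", 0.00, 60, 0),
               ("2nd layer shotcrete", 2.31, 8, 15),
               ("3rd layer shotcrete", 0.59, 8, 15)]
fixed_time = [("Survey/map", 20.00, 15),
              ("1st layer WWF", 30.00, 15),
              ("Lattice girders", 30.00, 15),
              ("2nd layer WWF", 30.00, 15)]

work = sum(q / r * 60 for _, q, r, _ in rate_driven) + sum(t for _, t, _ in fixed_time)
mob = sum(m for *_, m in rate_driven) + sum(m for *_, m in fixed_time)
print(work, mob, work + mob)   # ≈ 209.5, 120.0, 329.5 min (matches Table 4.6)
```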

Figure 4.27: CYCLONE North LRT Model



4.4.4 CYCLONE Modelling


The model used is presented in Figure 4.27 and is based on the cycle time
analysis provided in Table 4.6. Resources shown in the model include: one
mining crew, one backhoe, one truck, one survey crew and one shotcrete
machine. The mining crew resource is used throughout the entire process.
One run was necessary for this simulation, and the counter was configured
to terminate the simulation after 260 meters had been excavated. As can
be seen from this model, the mining crew completes the entire process, and
links with the 4 other resources throughout its run. These resources each
have their own cycles and are each only being used at one portion of the
tunnelling process.

4.4.5 Results
The total time that it took to complete the simulation was 1,427.92 hours.
Productivity of the overall system was 0.18 meters per hour, or 1.44 meters
per 8-hour shift. On average, it takes 329.52 minutes to complete one cycle,
which matches the value shown in Table 4.6, confirming that the model
is valid. With regards to resources in the model, while the mining crew
had no waiting time, the other resources waited a substantial amount of
time. The backhoe waited approximately 88% of the time, the truck waited
approximately 91% of the time, the surveyor waited approximately 89% of
the time, and the shotcrete machine waited on average 76% of the time.
The utilization of the resources, with the exception of the mining crew, was
therefore very low. See Figure 4.28 for a full statistics report.
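As a sanity check on these reported figures, the deterministic relationship between the cycle time and the totals can be reproduced in a few lines, assuming (as the counter configuration implies) one 329.52-minute cycle per metre of advance.

```python
cycle_min = 329.52      # one cycle per metre of advance (Table 4.6)
metres = 260            # counter limit used to stop the simulation

total_min = metres * cycle_min
total_hr = total_min / 60
print(total_hr)                 # ≈ 1,427.9 hours
print(metres / total_hr)        # ≈ 0.18 m per hour
print(metres / total_hr * 8)    # ≈ 1.46 m per 8-hour shift (reported as 1.44)
```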

4.4.6 Embellishment One


Our first embellishment transforms the provided times into triangular
distributions. This is done by using the original value as the mode, subtracting
15% to obtain the low value and adding 20% to obtain the high value. Now
that our model is stochastic, we will execute 15 runs to obtain the results.
When executed, the total time it took to complete the simulation was, on
average, 1,452.05 hours with a low time of 1,449.57 hours and a high time of
1,455.98 hours. Productivity of the system had a mean of 1.44 meters per
8-hour shift, with a negligible variance between runs. On average, it takes
335.09 minutes to complete one cycle with a small variance (0.166 minutes)

between runs. Statistics for waiting time are similar to the previous model.
We therefore conclude that the model is still valid and that the assumption
regarding the constant duration is acceptable given the model's performance
(not changing much in the results as the variances are small).
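The effect of this embellishment on the mean cycle time can also be anticipated analytically: the mean of a triangular distribution is (low + mode + high)/3, so turning the 329.52-minute constant into a (0.85, 1.00, 1.20) × mode triangle shifts the expected cycle to roughly 335 minutes, consistent with the reported 335.09 minutes. The sketch below, using Python's standard random module, illustrates both the expected value and the sampling.

```python
import random

mode = 329.52                      # original deterministic cycle time (minutes)
low, high = 0.85 * mode, 1.20 * mode

# Mean of a triangular distribution is (low + mode + high) / 3.
print((low + mode + high) / 3)     # ≈ 335.0 minutes, close to the reported 335.09

# Sampling one cycle duration the way the embellished model would:
print(random.triangular(low, high, mode))
```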

4.4.7 Embellishment Two


Our second embellishment incorporates a breakdown of the backhoe in the
original model. The interval between breakdowns is modelled using an
exponential distribution with a mean of 56 hours (3,360 minutes), and the
repair duration is modelled using a triangular distribution with a low value
of 408 minutes, a high value of 576 minutes, and a mode of 480 minutes.

Statistics Report
Date: Wednesday, January 14, 2015
Project: Model
Scenario: Scenario1
Run: 1 of 1

Non-Intrinsic Statistics
Element Mean Standard Observation Minimum Maximum
Name Value Deviation Count Value Value
Scenario1 (Termination Time) 85,675.200 0.000 1.000 85,675.200 85,675.200

Intrinsic Statistics
Element Mean Standard Minimum Maximum Current
Name Value Deviation Value Value Value
Backhoe (PercentNonEmpty) 0.877 0.328 0.000 1.000 1.000
Mining Crew (PercentNonEmpty) 0.000 0.000 0.000 1.000 0.000
Shotcrete (PercentNonEmpty) 0.761 0.426 0.000 1.000 0.000
Surveyor (PercentNonEmpty) 0.894 0.308 0.000 1.000 1.000
Truck (PercentNonEmpty) 0.913 0.281 0.000 1.000 1.000

Counters
Element Final Overall Average First Last
Name Count Productivity Interarrival Arrival Arrival
Counter 260.000 0.003 329.520 329.520 85,675.200

Waiting Files
Element Average Standard Maximum Current Average
Name Length Deviation Length Length Wait Time
Backhoe 0.877 0.328 1.000 1.000 287.953
Mining Crew 0.000 0.000 1.000 0.000 0.000
Shotcrete 0.761 0.426 1.000 0.000 83.587
Surveyor 0.894 0.308 1.000 1.000 293.698
Truck 0.913 0.281 1.000 1.000 300.003

Figure 4.28: CYCLONE North LRT Statistics Report



Figure 4.29: CYCLONE North LRT Model with Breakdown



The model is shown in Figure 4.29, with the breakdown cycle highlighted in red.
This model took a total of 1,592.25 hours to complete. The backhoe waited
on average 75% of the time, the truck waited on average 92% of the time,
the surveyor waited on average 89% of the time, and the shotcrete machine
waited on average 78% of the time. Neither the mining crew nor the breakdown
cycle had any waiting time. Results for this model remain similar to those seen
in the previous two models.
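To give a feel for what this embellishment injects into the model, the snippet below samples a sequence of breakdown and repair events over one simulated run length using the stated distributions (exponential time between failures, triangular repair time). It illustrates the input process only, not the full CYCLONE model.

```python
import random

RUN_LENGTH = 1_592.25 * 60          # total simulated minutes (from the results above)
MTBF = 3_360.0                      # mean time between breakdowns, minutes
REPAIR = (408.0, 576.0, 480.0)      # (low, high, mode) of the repair triangle

random.seed(1)
t, downtime, events = 0.0, 0.0, 0
while True:
    t += random.expovariate(1.0 / MTBF)        # time until the next breakdown
    if t > RUN_LENGTH:
        break
    repair = random.triangular(*REPAIR)        # sampled repair duration
    t += repair
    downtime += repair
    events += 1

print(events, round(downtime / 60, 1))          # breakdown count and repair hours
```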
Chapter 5
General Purpose Modelling
Recall that simulation in the context of this book was defined as:

the use of computer software (e.g., Simphony) to represent the


dynamic responses of a construction system by the behaviour of
a model made to represent it. A simulation uses mathematical
descriptions, graphical constructs, computer algorithms (as well
as other means) that are generally encapsulated in a simulation
software model to represent the real system.

The construction system is defined as:

any portion of the construction world (i.e., facility, environment,


project, resources, etc.) that has been chosen for studying the
changes that take place within it in response to varying stimuli,
for documenting its dynamic behaviour, or optimizing its perfor-
mance.

A simulation model is defined as:

a composition of objects (often associated with graphical nota-


tions) that represent an abstraction of the construction system.
The abstraction is generally in the form of concepts that describe
the elements of the system and its behaviour that are relevant to
the model as determined by the simulationist. The collection of
objects is used to help us describe the system, study and under-
stand it and simulate its behaviour.


In the previous discussion, we learned about building simulation models


using CYCLONE. The objects we used to build the models were boxes and
circles denoting tasks and queues, respectively. We abstracted a production
process (like the earthmoving or the SEM tunnelling examples) by describ-
ing the various tasks that are required to complete each process and the
resources required for each of those tasks. The modelling elements and the
interaction of the resources produce models that, when simulated, help us
understand the underlying system behaviour, experiment with how it responds
to various changes, and estimate statistics for decision making.
There are many documented advantages of CYCLONE, but these advantages
mostly stem from the ease with which the system can be learned and a model
can be created. CYCLONE models are effective for repetitive processes that
are not too complicated.
In this chapter we learn about another approach for modelling dynamic
systems through the composition of more complex objects that can describe
more details of the underlying system. This approach is referred to as General
Purpose modelling: with it, we can build models for a variety of applications
and varying degrees of complexity.

5.1 Building a Simple Model


In this chapter, we will learn how to build a simulation model using Gen-
eral Purpose simulation modelling objects with Simphony. The approach we
will use is known as process interaction simulation modelling, where a model
is composed of basic building blocks called modelling elements. Each ele-
ment describes a specific modelling situation. When elements are connected
together, they form a representation of a construction process. Modelling
elements are generally connected with arrows that represent the directional
flow of virtual entities in the model. In general terms, models include the
following basic building blocks:

1. modelling elements,

2. entities,

3. directional arrows, and

4. containers to hold information.



Modelling elements vary from one simulation system to another, but most
simulation systems include elements that represent work tasks, queuing of
entities, and the collection of statistics.
Entities are virtual objects that are essential to modelling dynamic systems
such as the ones we are interested in. The entity may represent a customer
requiring service (e.g., a truck that requires loading); or a communication
message between various elements to regulate flow in the model (e.g., all pre-
cast material required to start installation has been delivered; send a signal
to the installation sub-model that installation can commence).
To build a model, we generally need to describe the life-cycle of the entity
as it navigates from one modelling element to the next in the model. In
general terms, when a model is created, one should be able to describe the
general workflow of the real construction process by simply following the
journey of the entity within the model.
Directional arrows describe the direction the entity follows in the model.
The entity originates from one modelling element and generally flows to an-
other element as per the direction of the arrow connecting the elements.
Containers, as the name implies, are used to hold information pertinent
to the model, but where entities generally do not go. An example of this
may be a container to hold statistics, which might be required to define
what statistics need to be collected by other elements in the model. When
observations are collected by other elements, they are simply stored in this
container to analyze after the simulation is complete.
A dynamic process interaction simulation model similar to the ones of
interest to us in this book is created by creating entities, routing them
between different modelling elements over time, and observing and recording
the changes in the system until the simulation stops. In essence, we build a
simulation model by creating entities and following their lifecycle.

5.1.1 Example: An Excavation Process


As an example of General Purpose simulation modelling using Simphony,
let's model an excavation process, as shown in Figure 5.1. To begin with, we
need to understand the objective of the simulation and then understand the
system itself (the excavation process, in this case). Assume the objective is
to determine the production of the system and the utilization of the loader.
The process itself is self-explanatory for this simple problem, as illustrated in

Figure 5.1: Schematic of an Excavation Operation

Figure 5.2: Model of an Excavation Operation

Figure 5.1. Trucks load dirt at an excavation location, travel to a dumpsite,


dump their load and return for more loads. The trucks are loaded by one
excavator at the loading location.
To create a model, first think of the entity that we are interested in
following. Since we are interested in the production of the system, we are
probably interested in the loads of dirt being hauled. We can therefore define
the entity as a truck-load of dirt or a cubic meter of dirt. Let us assume it is
the truck-load of dirt (we will refer to this entity as the "truck"). If we observe the
lifecycle of the truck-load of dirt and follow it until the project is complete, we
will be able to estimate production of the process, as well as other statistics
such as the loader utilization.
In building the model, we start with the model shown in Figure 5.2. We
start by defining the truck as the main entity in the model, since it can be
thought of as a customer requiring service. To do this, we use a "Create"
element (the circular element in the model) and configure it to create a truck
at the start of simulation. Next, we set up the elements required to model
the life cycle of the truck. First, the entity (truck) should load dirt. The
loading activity can be modelled with a "Task" element (the square elements
in the model). The loading task has one server specified since we only have
one excavator. This type of "constrained" task forces trucks to wait in a
queue if another truck is being served by the excavator at the time it arrives.
When the server becomes available, the trucks in the queue will be served
on a first-come first-served basis. Once the truck finishes loading, it passes
on to another "Task" element, which models the travel of the truck to the
dump site. This task is not constrained by how many trucks are traveling,
and as such, it has an unlimited number of servers and there will be no
queuing at the task. We call this type of task "unconstrained." Once the
travel is complete, the truck proceeds to another "Task" element that models
the dumping process. Again, this task is unconstrained. Once dumping is
complete, the truck passes through a "Counter" element (the small circle
with a flag on top), which records production (a truckload of 16 m3 has been
produced). After passing through the counter, the truck passes on to another
unconstrained "Task" element that models the return trip. Finally, the truck
is routed back to the loading task to begin another cycle.
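Simphony is the environment used throughout this book, but the same process-interaction idea can be sketched in a few lines of Python using the open-source SimPy library. The sketch below is an illustrative analogue of Figure 5.2 only; the names and the use of SimPy are our own assumptions and are not part of Simphony.

```python
import simpy

LOAD, TRAVEL, DUMP, RETURN = 7, 17, 3, 13   # task durations (minutes)
TRUCK_CAPACITY = 16                         # m3 per truckload
TARGET = 10_000                             # m3 of dirt to deliver

def truck(env, loader, counter):
    """Life cycle of one truck entity: load, travel, dump, count, return."""
    while True:
        with loader.request() as req:        # the constrained loading task
            yield req                        # queue until the excavator is free
            yield env.timeout(LOAD)
        yield env.timeout(TRAVEL)            # unconstrained travel task
        yield env.timeout(DUMP)              # unconstrained dump task
        counter["delivered"] += TRUCK_CAPACITY   # the "Counter" element
        if counter["delivered"] >= TARGET:   # counter limit reached: stop
            break
        yield env.timeout(RETURN)            # unconstrained return task

env = simpy.Environment()
loader = simpy.Resource(env, capacity=1)     # the single excavator (one server)
counter = {"delivered": 0}
env.process(truck(env, loader, counter))     # the "Create" element makes one truck
env.run()
print(env.now, counter["delivered"])         # 24987 minutes, 10000 m3
```

Following the generator function from top to bottom is exactly the exercise of following the entity's journey through the model; running this one-truck version reproduces the 24,987-minute result derived later in Section 5.1.3.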

5.1.2 Primary Elements


This simple model demonstrated the functionality of three key modelling
elements. We discuss them below.

The Create Element


A Create element is responsible for creating entities and intro-
ducing them into the model. It has the capability of introducing
a number of entities all at once at the start of simulation, or it can
introduce single entities at random intervals during simulation.
Each Create element has the following properties:
First (input): The simulation time at which the first entity will be created.
Normally this property will be set to zero, but you can modify it to
introduce a delay before the first entity arrives. The value of this
property can be a probability distribution, thus introducing the first
entity at a randomly determined time.

Interval (input): The time interval between subsequent entity creations.
For example, if the first property is set to zero and this property is
set to 10, entities will be created at the following times: 0, 10, 20, 30,
40, etc. It is permissible to set this property to zero, in which case all
the entities will be created together at the time specified by the first
property. Like the first property, the value of this property can be a
probability distribution, in which case entities will not arrive in equal
steps.

Quantity (input): The total number of entities to create. Once the element
has created this many entities it will cease introducing entities into the
model.

Created (output): The total number of entities that were created during
simulation. Note that it is possible for this number to be less than the
value of the Quantity property if, for example, the simulation termi-
nated early. It will never be larger than the Quantity property.

The Task Element


A Task element is responsible for modelling an activity. It
achieves this by holding the entity for a period of time, as specified
in its duration property. If the task has a limited number
of servers, it is associated with its own queue file where entities
are forced to wait when all servers are occupied with other
entities. If it is unconstrained, it simply releases the entity after the specified
duration. A Task element has the following properties:

Type (input): The type of task (constrained/unconstrained).

Duration (input): The duration of the activity. The time can be constant,
random, or a mathematical function as required, though you should
make sure that the value is never negative. Be especially wary of
probability distributions that are unbounded below (the normal
distribution, for example). Simphony will issue a warning if you specify
such a distribution.

ReportStatistics (input): A boolean value indicating whether or not the


Task should appear in the Statistics Report that Simphony generates.

Servers (input): The number of servers available to the activity. This


property is only of interest if the task is constrained.

File Length (statistic): An intrinsic (time-dependent) statistic describing


the length of the activity's internal queue over time. This property is
only of interest if the task is constrained.

Utilization (statistic): An intrinsic (time-dependent) statistic describing


the utilization of the activity's servers over time. This property is only
of interest if the task is constrained.

Waiting Time (statistic): A non-intrinsic statistic describing the amount


of time that an entity needed to wait for a server. This property is only
of interest if the task is constrained.

The Counter Element


A Counter element is used to record important milestones in the
lifecycle of an entity. It simply counts the number of entities
passing through and the simulation time at which they are observed.
Normally, it is used to record production in the model
and should be strategically positioned to reflect the completion
of a unit of production. Every Counter element has the following properties:

Initial (input): The initial value of the Counter. This property is generally
set to zero.

ReportStatistics (input): A boolean value indicating whether or not the


Counter should appear in the Statistics Report that Simphony gener-
ates.

Step (input): The amount the Counter should be incremented with each
passing entity. Normally, this property is set to 1; however, it is
sometimes useful to set it to a value that represents the "capacity" of the
passing entity. For example, in an earthmoving model it could be set to
the capacity of the truck (16 m3) so that the counter is counting cubic
meters of dirt delivered rather than truck cycles. It is possible to
set this property to a negative value so that the counter is counting
down.

Limit (input): The count value at which simulation will be terminated.


Leave this property set to zero if the counter is not limiting.

Count (output): The number of entities that passed through the Counter
during simulation.

Time (output): The simulation time at which the most recent passing entity
was observed. Note that this need not be the time at which simulation
finished, although if the Counter was responsible for terminating
simulation (i.e., the limit was reached), it will be.

Production (statistic): A non-intrinsic (time-independent) statistic de-


scribing the count with respect to simulation time.

ProductionRate (statistic): A non-intrinsic (time-independent) statistic


describing the ratio of the count and the simulation time.

5.1.3 Examining Results


The best way to learn simulation is to visualize the dynamics of the model.
Unlike a CPM network where you would normally follow the logic of the
model by following the activities in the network, with a simulation model,
you have to learn to follow the journey of an entity as it traverses various
elements in the model. For the simple model we created, you should be able
to look at the model and see that a truck is created at the Create element; the
truck then goes to load. Since the loading task has a limited number of servers
(one excavator in this case), the truck checks to see if the server is available
for loading or busy with other entities. When the excavator is available, the
truck starts loading. After 7 minutes elapse, the loading is complete and the
truck starts traveling to the dump site. The travel activity is unconstrained,
and therefore, the truck simply commences the task, which is scheduled to
complete in 17 minutes. Once the truck has completed traveling, it passes
on to the dump task, which takes 3 minutes to complete. When it completes
this task, it passes through the production counter and records that one
truckload of dirt (16 m3 ) has been produced. The truck then proceeds to
the return task, which will take 13 minutes. Finally, the truck returns for
another load and the process continues until certain conditions are met (in
our case, when 10,000 m3 of overburden have been delivered).

Now that the model is complete, we can run a simulation. Suppose we


would like to find out how many hours it takes to deliver 10,000 m3. We set
up this model to make it simple and intuitive. The loading time is 7 minutes
and the back cycle time is 33 minutes, so the truck completes one cycle every
40 minutes. To deliver 10,000 m3 requires

10,000 m3 ÷ 16 m3 = 625 truckloads,

so it should take

625 × 40 min − 13 min (the final return trip is not included) = 24,987 min = 416.45 hrs.

When the model is simulated inside Simphony, the production counter reports
that 24,987 minutes elapsed before the final cubic meter was observed.
The production counter also reports that the overall productivity of the system
was 0.400 m3 per minute, which works out to 24 m3 per hour. These
results from Simphony confirm that the model is accurate and logical to
follow, but note that all we have done is add up the times it takes
to complete each of its tasks! This leads us to a question: do we actually
need a simulation model for this process, or would spreadsheet calculations
be sufficient? The answer will become evident as we return to this scenario
throughout the chapter.
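That observation is easy to verify directly; under these deterministic assumptions the whole model collapses to the arithmetic below.

```python
# Deterministic single-truck check: one load every 40-minute cycle.
cycle = 7 + 17 + 3 + 13          # load + travel + dump + return, in minutes
loads = 10_000 // 16             # 625 truckloads of 16 m3
total = loads * cycle - 13       # the final return trip is not needed
print(total, total / 60)         # 24987 min, ≈ 416.45 hours
print(10_000 / total * 60)       # ≈ 24 m3 per hour
```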
For now, let's make our model more realistic. Suppose that we have 7
trucks in our process (all the same size for now, 16 m3), and one excavator
for excavation and loading. Although we can still manage to compute the
production time using simple arithmetic, we will now have to account for
queuing at the excavator, which complicates matters (we recommend
that the reader attempt this calculation to appreciate the potential benefits).
According to the data obtained at the site, the loading time is 7 minutes and
the average back cycle time is 33 minutes. The total quantity of dirt that
should be removed is 10,000 m3, or 625 truckloads.
The simulation model above can be quickly adjusted to reflect the new
situation. This is achieved by simply changing the properties of the affected
elements in the model. In this case, the Create element needs to be
reconfigured to create 7 entities (trucks) at the start of simulation instead of
just one. This time, when the simulation is run, the results are as follows:
• Final simulation time: 4,395 min = 73.25 hrs.

• Overall productivity: 2.275 m3/min = 136.5 m3/hr.

• Minimum waiting time to load: 0 min.

• Average waiting time to load: 9.134 min.

• Maximum waiting time to load: 42 min.

• Average length of truck queue: 1.307 trucks.

• Utilization of excavator: 100%.

As before, the simulation time and overall productivity are reported by the
production counter. The remaining statistics are reported by the loading
task. Using this simple model, we can investigate many aspects of the pro-
cess, including estimating production rates, balancing equipment, and study-
ing the impact of various factors on production.
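In the SimPy sketch shown earlier, this change amounts to creating seven truck processes instead of one; an illustrative adjustment (reusing the hypothetical truck generator and constants from that sketch) is shown below.

```python
import simpy

# Reuses the truck() generator and the constants from the earlier sketch.
NUM_TRUCKS = 7

env = simpy.Environment()
loader = simpy.Resource(env, capacity=1)
counter = {"delivered": 0}
for _ in range(NUM_TRUCKS):          # the Create element now introduces 7 trucks
    env.process(truck(env, loader, counter))
env.run()

# Unlike Simphony's counter limit, this sketch lets trucks already in transit
# finish their dump, so the final time can run slightly past the reported 4,395 min.
print(env.now / 60, counter["delivered"])
```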
We shall make one last improvement before we leave the simple intro-
ductory model we've created. Construction activities are generally uncertain
in their timing. The back cycle time of the truck will not always be 33
minutes, even for the same route. We can model variability in duration of
work tasks using probability distributions. In our example, we can use an
exponential distribution with a mean of 7 minutes to model the loading task,
while the travel time may have a triangular distribution with a minimum of
14 minutes, a maximum of 22 minutes, and a most likely value of 17 min-
utes (similar to a PERT duration estimate). We'll leave the duration of the
dumping and return tasks at the constant values for now. This gives us a
more accurate representation of our real construction process (although still
simplified). The results from the revised model are shown below:

• Final simulation time: 4,886 min ≈ 81.44 hrs.

• Overall productivity: 2.047 m3/min = 122.82 m3/hr.

• Minimum waiting time to load: 0 min.

• Average waiting time to load: 13.835 min.

• Maximum waiting time to load: 76.500 min.

• Average length of truck queue: 1.999 trucks.

• Utilization of excavator: 90.1%.



Figure 5.3: Production Rate vs. Simulation Time


Note that because the simulation model is no longer deterministic, the results
would almost certainly be different if the model were run again.
Simulation models are fairly rich in information related to the process. For
example, the production counter tracks production throughout the simulated
time. The chart in Figure 5.3 shows how the production rate changed as
the simulation progressed. The chart demonstrates that it took the process
roughly 600 minutes (a little over 10 hours) to reach a steady state of around
2.05 m3 per minute.
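Extending the earlier SimPy sketch to this stochastic case only requires swapping the constant durations for sampled ones; the fragment below shows the idea (again an illustrative analogue, not Simphony code). Because the result now varies from run to run, it is usual to repeat the experiment several times and summarize the outputs.

```python
import random

def load_time():
    return random.expovariate(1 / 7)        # exponential, mean 7 minutes

def travel_time():
    return random.triangular(14, 22, 17)    # triangular(low, high, mode)

# Inside the truck generator, the constant delays become sampled delays, e.g.:
#   yield env.timeout(load_time())
#   yield env.timeout(travel_time())
# Several independent replications of the 7-truck model would then be run and
# their final times averaged, since each run now gives a different answer.
```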

5.2 Hand Simulation


Simulation models are generally processed by a computer using a simulation
algorithm, which in our case is a discrete event processing algorithm. When
a simulation model is run on a computer, a number of things happen behind
the scenes, which are all initiated and managed by a simulation engine. The
objective of discussing hand simulation is to give you an understanding of
what is going on inside this simulation engine.
The discrete event processing algorithm is based on the concepts of events,
entities, and simulation time. An event is defined as an occurrence that

causes the state of the simulation system to change. For example, an event
might be a truck in an earthmoving simulation arriving at the dump site,
thus causing the state of the truck to change from "hauling" to "dumping";
it may be a welder in a pipe spool fabrication model beginning work on a
spool, thus changing the state of the spool from "fitting" to "welding" and,
at the same time, causing the welder to become unavailable to other spools.
An entity is the primary object associated with an event. In these examples,
the entities are the truck and the spool, respectively. Simulation time is the
time at which events occur.
A discrete event simulation engine is responsible for scheduling and pro-
cessing these events. Scheduling events is the process of the simulationist
informing the simulation engine of precisely when an event will occur. To do
this, the simulationist needs to tell the simulation engine three things:

1. The entity associated with the event (e.g., the particular truck or spool
to which the event applies),

2. What event is going to occur (e.g., a truck will arrive at the dump site
or a welder will become available to work on a spool), and

3. The simulation time at which the event will occur.

The processing of an event happens when the simulation engine advises the
simulationist that the time has come for a previously scheduled event to
occur. When an event is processed, the simulation engine will tell you the
same three pieces of information that were specified at the time the event
was scheduled, namely, what event is being processed, the entity associated
with the event, and the simulation time. In response to this information, the
simulationist will typically update the state of the system and/or schedule
further events. Note that it is not permissible to schedule an event with a
simulation time earlier than the time of the event currently being processed
(i.e., time doesn't run backwards!).
In order to accomplish these responsibilities, a discrete event simulation
engine requires two things: a list of scheduled events (ordered by simulation
time), and a simulation clock. The list of scheduled events keeps track of
those events that have been scheduled but not processed. When an event is
scheduled it is inserted into the list at the correct location, and when it is
processed it is removed from the list. The simulation clock keeps track of the

The flowchart in Figure 5.4 can be summarized as follows. The simulation
starts by setting TNOW = 0. In the generation phase, we check whether an
activity can begin (a task can begin if a prior modelling element has released
an entity). If so, the entities are moved to the activity, a duration ∆ (possibly
stochastic) is generated for it, the event time TNOW + ∆ is calculated, and the
event is recorded in the event list. When no further activity can begin, the
advancement phase checks whether the event list is empty; if it is, intrinsic
statistical observations are recorded and the simulation ends. Otherwise, the
earliest event on the event list is transferred to the chronological list, TNOW is
set to the time of the transferred event, entities are released from the activity,
and the cycle repeats. In the case of a tie, the earliest event is considered to
be the one that was scheduled (i.e., recorded) first.

Figure 5.4: Hand Simulation Algorithm



current simulation time. It is initialized to zero at the start of simulation, and


thereafter is set to the simulation time of the event most recently processed.
Hand simulation is the process of emulating a discrete event simulation
engine using paper and pencil. Like any discrete event simulation engine,
this one requires a list of events and a simulation clock. On paper, the list
of simulation events is maintained by using two columns: the first, typically
labelled "Events," records events as they are scheduled; the second, typically
labelled "Chronological," records events as they are processed. The simulation
clock is emulated by using a third column, typically labelled "TNOW."
The algorithm used to emulate a discrete event simulation engine by hand
is shown in Figure 5.4. We will illustrate how the algorithm works by looking
at two examples.
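The bookkeeping that the paper columns perform is exactly what a software engine does with an event list and a clock. The following minimal sketch (plain Python, using the standard-library heapq module; all names are our own) shows the core of such an engine: events are scheduled as (time, sequence, entity, event) tuples, and processing always removes the earliest one, with the sequence number breaking ties in favour of the event scheduled first.

```python
import heapq
import itertools

class EventList:
    """A minimal discrete event 'engine': an event list plus a clock."""

    def __init__(self):
        self._events = []                  # heap of (time, seq, entity, name)
        self._seq = itertools.count()      # tie-breaker: scheduling order
        self.tnow = 0.0                    # the simulation clock
        self.chronological = []            # the processed-events 'story line'

    def schedule(self, time, entity, name):
        if time < self.tnow:
            raise ValueError("time doesn't run backwards!")
        heapq.heappush(self._events, (time, next(self._seq), entity, name))

    def process_next(self):
        time, _, entity, name = heapq.heappop(self._events)
        self.tnow = time                   # advance the clock to the event time
        self.chronological.append((time, entity, name))
        return time, entity, name

    def empty(self):
        return not self._events
```

In the examples that follow, the "Events" column corresponds to the heap, the "Chronological" column to the chronological list, and TNOW to the clock.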

5.2.1 Example: A Simple Earthmoving Model


Let's consider the model of a simple earthmoving operation shown in Fig-
ure 5.5. In this operation, two trucks are loaded by a single front-end loader,
they travel to a dump site, dump, and return to begin the cycle anew. It
takes 7 minutes for the loader to load a truck, 17 minutes for a truck to travel
to the dump site, 3 minutes for a truck to dump, and 13 minutes for a truck
to make the return journey. A General Purpose model of the operation is
shown in Figure 5.6.
We will assume that the Create element creates the two trucks at the
start of the simulation, that the Task element representing the loading ac-
tivity is constrained to a single server (loader), and that the other tasks are
unconstrained. The entities in this model are obviously the trucks. From a
process perspective, the two trucks are identical; however, from the perspec-
tive of the simulation engine, they are distinct entities, so in what follows,
we'll refer to one truck as A and the other as B.
To begin, take a sheet of paper and at the top write headings for the
following columns: "TNOW," "Events," "Chronological," and "Prod." Note
that the fourth column, "Prod," is not a part of the simulation engine; instead
we'll be using it to track the productivity of our system each time an entity
flows through the production counter. The first step of the algorithm is to
set TNOW = 0, so under the heading "TNOW" write the number zero. Your
paper should look something like this:
TNOW Events Chronological Prod
0

Figure 5.5: Schematic of a Simple Earthmoving Operation

Figure 5.6: General Purpose Model of a Simple Earthmoving Operation



The next step of the algorithm asks whether a task can begin. The answer
is yes; both trucks can begin the loading task as they have just been created.
Let's assume that truck A goes first, so it is moved to the loading task. The
duration of this task is 7 minutes and TNOW is currently 0, so the time at
which the task will finish is TNOW + ∆ = 0 + 7 = 7. We record this under
the "Events" column; the paper now looks like this:

TNOW Events Chronological Prod


0 A loaded @ 7

We now move back to the question of whether a task can begin. This time
the answer is no; truck A is busy being loaded, and truck B cannot begin the
loading task, because the task is constrained and the only available server is
busy with truck A. We therefore move to the next question: is the event list
empty? Again, the answer is no; the event we just recorded is in the list. We
now need to scan the event list for the earliest event, which is easy as there
is only one event. We copy this event into the "Chronological" column
and cross it out from the "Events" column. Next, we need to set TNOW to
the time of this event, so we cross out the 0 in the "TNOW" column and write
a 7 underneath. Our sheet of paper now looks like this:

TNOW Events Chronological Prod


0 A loaded @ 7  
7 A loaded @ 7

At this point, the task of loading truck A is complete and the truck
is released from the loading task. We now return to the question: can a
task begin? This time the answer is yes; truck A can begin the travel task
and truck B can begin the loading task. It does not matter which truck
we choose to deal with first (the algorithm will produce the same results
in either case), so let's pick truck B. First, we move the entity representing
truck B to the loading task, and then we calculate the finish time, which is
TNOW + ∆ = 7 + 7 = 14. We record the event in the event list. Again we ask
the question: can a task begin? The answer is yes, as we still need to deal
with truck A traveling. The entity representing truck A is now moved to the
travel task, and the finish time is calculated to be TNOW + ∆ = 7 + 17 = 24.
This event is also recorded under the "Events" column. Our sheet of paper
now looks like this:

TNOW Events Chronological Prod


0 A loaded @ 7  
7 B loaded @ 14 A loaded @ 7
A arrives @ 24

Again, we ask the question: can a task begin? This time the answer is no;
truck A is busy traveling and truck B is being loaded. In addition, the event
list isn't empty, so we need to scan the list for the earliest event, which is the
completion of loading truck B. We cross this event out under the "Events"
column and copy it to the "Chronological" column. In addition, we update
TNOW to 14. Our sheet of paper now looks like this:
TNOW Events Chronological Prod
0 A loaded @ 7  
7 B loaded @ 14 A loaded @ 7 
14 A arrives @ 24 B loaded @ 14

The loading task is now complete and the entity representing truck B is
released. We return to the question: can a task begin? This time the answer
is yes; truck B can begin the travel task. We move the entity to the travel
task and calculate the finish time as TNOW + ∆ = 14 + 17 = 31. We record
this new event under the "Events" column:
TNOW Events Chronological Prod
0 A loaded @ 7  
7 B loaded @ 14 A loaded @ 7 
14 A arrives @ 24 B loaded @ 14
B arrives @ 31

This time, when we ask if a task can begin, the answer is no; both trucks
are in the process of traveling to the dump site. Scanning the event list, we
see that the arrival of truck A at the dump site is the earliest event, so it is
crossed out from the "Events" column and transferred to the "Chronological"
column. TNOW is then updated to 24:
TNOW Events Chronological Prod
0 A loaded @ 7  
7 B loaded @ 14 A loaded @ 7 
14 A arrives @ 24 B loaded @ 14 
24 B arrives @ 31 A arrives @ 24

The travel task is now complete and the entity representing truck A is
released. At this point, truck A can begin the dump task. The finish time
for this task is TNOW + ∆ = 24 + 3 = 27, and an event is recorded:

TNOW Events Chronological Prod


0 A loaded @ 7  
7 B loaded @ 14 A loaded @ 7 
14 A arrives @ 24 B loaded @ 14 
24 B arrives @ 31 A arrives @ 24
A dumped @ 27

There are no other tasks that can begin at this time, so the event list is
scanned for the earliest event. This turns out to be the event we just
scheduled, so that event is crossed out from the "Events" column and transferred
to the "Chronological" column. TNOW is then updated to 27 and the entity
representing truck A is released. At this point, the truck A entity passes
through the production counter, and we record the production in the "Prod"
column: we've moved 1 truckload in 27 minutes. Once it has passed the
production counter, truck A can begin the return task. The finish time is
TNOW + ∆ = 27 + 13 = 40, and the event is added to the "Events" column.
Our sheet of paper now looks like this:

TNOW Events Chronological Prod


0 A loaded @ 7  
7 B loaded @ 14 A loaded @ 7 
14 A arrives @ 24 B loaded @ 14 
24 B arrives @ 31 A arrives @ 24 
27 A dumped @ 27 A dumped @ 27 1/27
A returns @ 40

As no other tasks can begin and the event list is not empty, we need
to scan the event list for the earliest event. This is the arrival of truck B
at the dump site. The event is removed from the event list, added to the
chronological list, and TNOW is updated to 31. Truck B is now released
from the travel task and it can begin the dump task. The finish time is
TNOW + ∆ = 31 + 3 = 34, and the event is added to the event list. Our sheet
of paper now looks like this:

TNOW Events Chronological Prod


0 A loaded @ 7  
7 B loaded @ 14 A loaded @ 7 
14 A arrives @ 24 B loaded @ 14 
24 B arrives @ 31 A arrives @ 24 
27 A dumped @ 27 A dumped @ 27 1/27
31 A returns @ 40 B arrives @ 31
B dumped @ 34

Once again, there are no other tasks that can begin, and there are still
events to process. The earliest event in the event list is the one we just
scheduled, so it is removed, transferred to the chronological list, and TNOW
is updated to 34. Truck B is now released from the dump task and passes
through the production counter. As with truck A, we record the production
in the production column: we've now managed to move 2 truckloads in 34
minutes. After passing the production counter, truck B can begin the return
task, which has a finish time of TNOW + ∆ = 34 + 13 = 47. Once this event
is added to the event list, our paper looks like this:

TNOW Events Chronological Prod


0 A loaded @ 7  
7 B loaded @ 14 A loaded @ 7 
14 A arrives @ 24 B loaded @ 14 
24 B arrives @ 31 A arrives @ 24 
27 A dumped @ 27 A dumped @ 27 1/27
31 A returns @ 40 B arrives @ 31 
34 B dumped @ 34 B dumped @ 34 2/34
B returns @ 47

Again, there are no other tasks that can begin and there are still events
to process. The earliest event is the return of truck A to the loading site,
so this event is removed, transferred to the chronological list, and TNOW is
updated to 40. Truck A is released from the return task and can now begin
the load task, which has a finish time of TNOW + ∆ = 40 + 7 = 47. Once this
event is added to the event list, our paper looks like this:

TNOW Events Chronological Prod


0 A loaded @ 7  
7 B loaded @ 14 A loaded @ 7 
14 A arrives @ 24 B loaded @ 14 
24 B arrives @ 31 A arrives @ 24 
27 A dumped @ 27 A dumped @ 27 1/27
31 A returns @ 40 B arrives @ 31 
34 B dumped @ 34 B dumped @ 34 2/34
40 B returns @ 47 A returns @ 40
A loaded @ 47

No further tasks can begin and the event list still contains events, so we
need to scan it for the earliest event. This time the result is a tie; both trucks
are scheduled to complete their tasks at time 47. We need to make use of
our tie-breaking procedure, which states that in the case of a tie, the earliest

event is the highest on the list (which will be the event that was recorded
first). Thus, we will process the return of truck B to the loading site first.
This event is removed from the event list, transferred to the chronological
list, and TNOW is updated to 47. Truck B is now released from the return
task; however, it cannot begin the load task, as the load task is constrained
and the only server is still busy with truck A. Our paper now looks like this:
TNOW Events Chronological Prod
0 A loaded @ 7  
7 B loaded @ 14 A loaded @ 7 
14 A arrives @ 24 B loaded @ 14 
24 B arrives @ 31 A arrives @ 24 
27 A dumped @ 27 A dumped @ 27 1/27
31 A returns @ 40 B arrives @ 31 
34 B dumped @ 34 B dumped @ 34 2/34
40 B returns @ 47 A returns @ 40 
47 A loaded @ 47 B returns @ 47

At this point no further tasks can begin. There is only one event in
the event list (the completion of loading truck A), so it is transferred to
the chronological list. TNOW does not need to be updated as its value has
been updated to 47 previously, but we transfer its value to the next row
nevertheless. Truck A is released from the loading task and two tasks can
now begin: the loading of truck B and the traveling of truck A. Once these
tasks are scheduled, our sheet of paper will look like this:
TNOW Events Chronological Prod
0 A loaded @ 7  
7 B loaded @ 14 A loaded @ 7 
14 A arrives @ 24 B loaded @ 14 
24 B arrives @ 31 A arrives @ 24 
27 A dumped @ 27 A dumped @ 27 1/27
31 A returns @ 40 B arrives @ 31 
34 B dumped @ 34 B dumped @ 34 2/34
40 B returns @ 47 A returns @ 40 
47 A loaded @ 47 B returns @ 47 
47 B loaded @ 54 A loaded @ 47
A arrives @ 64

Both trucks have now completed a full cycle. We leave it as an exercise


for the reader to continue the simulation for another cycle.
There are a couple of things worth noting about our results. First, the
"TNOW" column clearly shows the variability of the time step: sometimes

TNOW increased by 7, sometimes by 10, and once it didn't increase at all (since
there were two events to process with the same simulation time). This is
entirely characteristic of discrete event simulation. Second, the chronological
list provides us with a story line of what happened during simulation. Even
though the various events were not scheduled in chronological order, the
algorithm ensures that they are processed in the correct order.
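The same chronological list can be reproduced with the minimal event-list engine sketched earlier. The short driver below (our own illustrative code, reusing the EventList class; the production column is omitted for brevity) pushes the two trucks through the 7/17/3/13-minute cycle and prints the processed events, which should match the story line developed by hand through "A loaded @ 47."

```python
# Driver for the EventList sketch: two trucks, one loader, one full cycle each.
DUR  = {"loaded": 7, "arrives": 17, "dumped": 3, "returns": 13}
NEXT = {"loaded": "arrives", "arrives": "dumped", "dumped": "returns"}

engine = EventList()
loader_busy = False
load_queue = ["A", "B"]                    # both trucks start at the loader

def try_start_loading():
    global loader_busy
    if load_queue and not loader_busy:
        loader_busy = True
        truck = load_queue.pop(0)
        engine.schedule(engine.tnow + DUR["loaded"], truck, "loaded")

try_start_loading()
while not engine.empty() and len(engine.chronological) < 9:
    time, truck, event = engine.process_next()
    if event == "loaded":                  # loader freed: next queued truck can start
        loader_busy = False
        try_start_loading()
    if event == "returns":                 # truck rejoins the loading queue
        load_queue.append(truck)
        try_start_loading()
    else:                                  # otherwise schedule the next task's finish
        engine.schedule(time + DUR[NEXT[event]], truck, NEXT[event])

for time, truck, event in engine.chronological:
    print(f"{truck} {event} @ {time:g}")
```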

5.2.2 Example: A Concrete Batch Plant


Next we'll consider the model of a concrete batch plant shown in Figures 5.7
and 5.8 and in Table 5.1. This model is a simple queuing system. We will
assume that the Create element will create 6 entities representing ready-mix-
concrete trucks. The first will be created at time zero, and subsequent trucks
will have the following interarrival times: 5 minutes, 5 minutes, 16 minutes,
38 minutes, and 13 minutes. The batch plant is a constrained task that can
only service one truck at a time, so if a truck arrives while the batch plant is
busy, it will be forced to queue. The batch plant will take 16 minutes to fill
the first truck, and subsequent trucks will take 15 minutes, 10 minutes, 13
minutes, 21 minutes, and 15 minutes to fill, respectively. Once a truck has
been filled, it departs from the system and is no longer of interest.
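Before working through the hand simulation, the deterministic arrival and service times in Table 5.1 can be pushed through a tiny single-server queue script; the waits and departure times it prints provide a check on the paper calculation (our own illustrative code).

```python
# Single-server FIFO queue check for the batch plant example.
arrivals = [0, 5, 10, 26, 64, 77]         # truck arrival times (min), Table 5.1
services = [16, 15, 10, 13, 21, 15]       # filling times (min), Table 5.1

free_at = 0.0                              # time the batch plant next becomes free
busy_time = 0.0
for arrive, service in zip(arrivals, services):
    start = max(arrive, free_at)           # wait if the plant is still busy
    wait = start - arrive
    free_at = start + service
    busy_time += service
    print(f"arrive {arrive:3d}  wait {wait:4.0f}  depart {free_at:5.0f}")

print("utilization =", busy_time / free_at)   # fraction of time the plant is busy
```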
In this example, we are going to be interested in collecting the following
pieces of information:

1. the utilization of the batch plant,

2. the average length of the queue, and

3. the average waiting time of a truck in the queue.

This means that we are going to need six columns: the three required to
emulate the simulation engine, and three more to track the statistics. As
before, we begin by writing a 0 under the TNOW column:

TNOW Events Chronological Util Queue Wait


0

Next, we ask whether an activity can begin; the answer is yes, the first
truck can arrive. The finish time for this activity is TNOW + ∆ = 0 + 0 = 0.
When this event has been added to the "Events" column, our sheet of paper
looks like this:

Figure 5.7: Schematic of a Concrete Batch Plant

Figure 5.8: General Purpose Model of a Concrete Batch Plant

Table 5.1: Concrete Batch Plant Timing

Truck No.   Arrival Time   Service Time
1            0 min         16 min
2            5 min         15 min
3           10 min         10 min
4           26 min         13 min
5           64 min         21 min
6           77 min         15 min

TNOW Events Chronological Util Queue Wait


0 T1 arrives @ 0

At this point, no other activities can begin, so we scan the event list
for the earliest event. The only event on the event list is the one we just
scheduled: the arrival of the first truck. We now cross out this event from
the "Events" column and add it to the "Chronological" column, and then
update TNOW to the time of the event. At this point, the first truck begins
loading, so the batch plant is now utilized. In addition, there are no trucks
in the queue waiting, and the first truck did not have to wait to be loaded.
We record these observations in the statistical columns. When we're done,
our sheet of paper will look like this:
TNOW Events Chronological Util Queue Wait
0 T1 arrives @ 0    
0 T1 arrives @ 0 100% 0 0

The loading of the first truck has begun, so we need to schedule an event
for it. The finish time for the event is TNOW + ∆ = 0 + 16 = 16. In addition,
we need to schedule the arrival of the next truck. The finish time for that
event is TNOW + ∆ = 0 + 5 = 5. With these two events added, our paper
looks like this:
TNOW Events Chronological Util Queue Wait
0 T1 arrives @ 0    
0 T1 departs @ 16 T1 arrives @ 0 100% 0 0
T2 arrives @ 5

No further activities can begin, so we scan the event list for the earliest
event. This is the arrival of the second truck at simulation time 5. We
transfer this event to the chronological list, and update TNOW to 5. This
time, the truck cannot begin loading immediately as the batch plant is still
busy with the first truck, so it will have to wait, and our queue grows to a
length of 1. After updating the statistics, our paper looks like this:
TNOW Events Chronological Util Queue Wait
0 T1 arrives @ 0    
0 T1 departs @ 16 T1 arrives @ 0 100% 0 0
5 T2 arrives @ 5 T2 arrives @ 5 100% 1 

The next activity to begin is the arrival of the third truck. The finish
time for this event is TNOW + ∆ = 5 + 5 = 10. Once this event is scheduled,
our paper looks like this:

TNOW Events Chronological Util Queue Wait


0 T1 arrives @ 0    
0 T1 departs @ 16 T1 arrives @ 0 100% 0 0
5 T2 arrives @ 5 T2 arrives @ 5 100% 1 
T3 arrives @ 10

No further activities can begin, so we need to scan the event list for the
earliest event. This turns out to be the arrival of the third truck, so we
transfer that event to the chronological list and update TNOW to 10. As with
the second truck, the third truck cannot begin loading, so it is queued and
our queue grows to a length of 2. After updating the statistics, our paper
looks like this:

TNOW Events Chronological Util Queue Wait


0 T1 arrives @ 0    
0 T1 departs @ 16 T1 arrives @ 0 100% 0 0
5 T2 arrives @ 5 T2 arrives @ 5 100% 1 
10 T3 arrives @ 10 T3 arrives @ 10 100% 2 

At this point, we need to schedule the arrival of the fourth truck. The
finish time for this event is TNOW + ∆ = 10 + 16 = 26. Once this event is
scheduled, our paper looks like this:

TNOW Events Chronological Util Queue Wait


0 T1 arrives @ 0    
0 T1 departs @ 16 T1 arrives @ 0 100% 0 0
5 T2 arrives @ 5 T2 arrives @ 5 100% 1 
10 T3 arrives @ 10 T3 arrives @ 10 100% 2 
T4 arrives @ 26

No further activities can begin, so we scan the event list for the earliest
event, which is the departure of the first truck. We transfer this event to
the chronological list, and update TNOW to 16. Now that it has departed,
the first truck is no longer of interest to us. The second truck, on the other
hand, is. It can now begin loading, which causes our queue to decrease in
length by 1; in addition, a glance at the chronological list shows us that the
second truck arrived at time 5 and it is now time 16, so the truck waited in
the queue for 16 − 5 = 11 minutes. After recording these statistics, our paper
looks like this:

TNOW Events Chronological Util Queue Wait


0 T1 arrives @ 0    
0 T1 departs @ 16 T1 arrives @ 0 100% 0 0
5 T2 arrives @ 5 T2 arrives @ 5 100% 1 
10 T3 arrives @ 10 T3 arrives @ 10 100% 2 
16 T4 arrives @ 26 T1 departs @ 16 100% 1 11

As the second truck has begun loading, we should schedule the completion
of this activity. The finish time is TNOW + ∆ = 16 + 15 = 31. With this
event scheduled, our paper looks like this:
TNOW Events Chronological Util Queue Wait
0 T1 arrives @ 0    
0 T1 departs @ 16 T1 arrives @ 0 100% 0 0
5 T2 arrives @ 5 T2 arrives @ 5 100% 1 
10 T3 arrives @ 10 T3 arrives @ 10 100% 2 
16 T4 arrives @ 26 T1 departs @ 16 100% 1 11
T2 departs @ 31

Once again, no activities can begin: the second truck is still loading, and
the fourth truck is yet to arrive. We need to scan the event list for the earliest
event, which turns out to be the arrival of the fourth truck. We transfer this
event to the chronological list and update TNOW to 26. As the batch plant
is busy with the second truck, the fourth truck cannot begin loading, so it is
queued and our queue grows to a length of 2. After updating the statistics,
our paper looks like this:
TNOW Events Chronological Util Queue Wait
0 T1 arrives @ 0    
0 T1 departs @ 16 T1 arrives @ 0 100% 0 0
5 T2 arrives @ 5 T2 arrives @ 5 100% 1 
10 T3 arrives @ 10 T3 arrives @ 10 100% 2 
16 T4 arrives @ 26 T1 departs @ 16 100% 1 11
26 T2 departs @ 31 T4 arrives @ 26 100% 2 

We now need to schedule the arrival of the fifth truck. This will happen
at TNOW + ∆ = 26 + 38 = 64. With this event scheduled, our paper looks
like this:
TNOW Events Chronological Util Queue Wait
0 T1 arrives @ 0    
0 T1 departs @ 16 T1 arrives @ 0 100% 0 0
5 T2 arrives @ 5 T2 arrives @ 5 100% 1 
10 T3 arrives @ 10 T3 arrives @ 10 100% 2 
16 T4 arrives @ 26 T1 departs @ 16 100% 1 11
26 T2 departs @ 31 T4 arrives @ 26 100% 2 
T5 arrives @ 64

No further activities can begin, so we scan the event list for the earliest
event, which is the departure of the second truck. We transfer this event to
the chronological list, and update TNOW to 31. After the second truck has
departed and left the system, the third truck can begin loading. This causes
our queue to decrease in length by 1 and, in addition, the chronological list
shows us that the third truck arrived at time 10 and it is now time 31, so the
truck waited in the queue for 31 − 10 = 21 minutes. After recording these
statistics, our paper looks like this:

TNOW Events Chronological Util Queue Wait


0 T1 arrives @ 0    
0 T1 departs @ 16 T1 arrives @ 0 100% 0 0
5 T2 arrives @ 5 T2 arrives @ 5 100% 1 
10 T3 arrives @ 10 T3 arrives @ 10 100% 2 
16 T4 arrives @ 26 T1 departs @ 16 100% 1 11
26 T2 departs @ 31 T4 arrives @ 26 100% 2 
31 T5 arrives @ 64 T2 departs @ 31 100% 1 21

As the third truck has begun loading, we should schedule the completion
of this activity. The finish time is TNOW + ∆ = 31 + 10 = 41. With this
event scheduled, our paper looks like this:

TNOW Events Chronological Util Queue Wait


0 T1 arrives @ 0    
0 T1 departs @ 16 T1 arrives @ 0 100% 0 0
5 T2 arrives @ 5 T2 arrives @ 5 100% 1 
10 T3 arrives @ 10 T3 arrives @ 10 100% 2 
16 T4 arrives @ 26 T1 departs @ 16 100% 1 11
26 T2 departs @ 31 T4 arrives @ 26 100% 2 
31 T5 arrives @ 64 T2 departs @ 31 100% 1 21
T3 departs @ 41

No further activities can begin, so we scan the event list for the earliest
event, which is the departure of the third truck. We transfer this event to
the chronological list, and update TNOW to 41. After this truck has left
the system, the fourth truck can begin loading. This causes our queue to
decrease in length by 1 and, in addition, the chronological list shows us that
the fourth truck arrived at time 26 and it is now time 41, so the truck waited
in the queue for 41 − 26 = 15 minutes. After recording these statistics, our
paper looks like this:

TNOW Events Chronological Util Queue Wait


0 T1 arrives @ 0    
0 T1 departs @ 16 T1 arrives @ 0 100% 0 0
5 T2 arrives @ 5 T2 arrives @ 5 100% 1 
10 T3 arrives @ 10 T3 arrives @ 10 100% 2 
16 T4 arrives @ 26 T1 departs @ 16 100% 1 11
26 T2 departs @ 31 T4 arrives @ 26 100% 2 
31 T5 arrives @ 64 T2 departs @ 31 100% 1 21
41 T3 departs @ 41 T3 departs @ 41 100% 0 15

As the fourth truck has begun loading, we need to schedule its departure.
This will happen at TNOW + ∆ = 41 + 13 = 54. With this event scheduled,
our paper looks like this:

TNOW   Events              Chronological       Util   Queue   Wait
0      T1 arrives @ 0      -                   -      -       -
0      T1 departs @ 16     T1 arrives @ 0      100%   0       0
5      T2 arrives @ 5      T2 arrives @ 5      100%   1       -
10     T3 arrives @ 10     T3 arrives @ 10     100%   2       -
16     T4 arrives @ 26     T1 departs @ 16     100%   1       11
26     T2 departs @ 31     T4 arrives @ 26     100%   2       -
31     T5 arrives @ 64     T2 departs @ 31     100%   1       21
41     T3 departs @ 41     T3 departs @ 41     100%   0       15
       T4 departs @ 54

No further activities can begin, so we scan the event list for the earliest
event, which is the departure of the fourth truck. We transfer this event
to the chronological list, and update TNOW to 54. This time, there are no
further trucks in the queue, so our batch plant becomes idle. Our paper now
looks like this:
TNOW   Events              Chronological       Util   Queue   Wait
0      T1 arrives @ 0      -                   -      -       -
0      T1 departs @ 16     T1 arrives @ 0      100%   0       0
5      T2 arrives @ 5      T2 arrives @ 5      100%   1       -
10     T3 arrives @ 10     T3 arrives @ 10     100%   2       -
16     T4 arrives @ 26     T1 departs @ 16     100%   1       11
26     T2 departs @ 31     T4 arrives @ 26     100%   2       -
31     T5 arrives @ 64     T2 departs @ 31     100%   1       21
41     T3 departs @ 41     T3 departs @ 41     100%   0       15
54     T4 departs @ 54     T4 departs @ 54     0%     0       -

Once again, no further activities can begin, and the only event on the
event list is the arrival of the fifth truck. We transfer this event to the
chronological list and update TNOW to 64. As the batch plant is currently
idle, the fifth truck can begin loading immediately and does not need to be
queued. After updating the statistics, our paper looks like this:

TNOW   Events              Chronological       Util   Queue   Wait
0      T1 arrives @ 0      -                   -      -       -
0      T1 departs @ 16     T1 arrives @ 0      100%   0       0
5      T2 arrives @ 5      T2 arrives @ 5      100%   1       -
10     T3 arrives @ 10     T3 arrives @ 10     100%   2       -
16     T4 arrives @ 26     T1 departs @ 16     100%   1       11
26     T2 departs @ 31     T4 arrives @ 26     100%   2       -
31     T5 arrives @ 64     T2 departs @ 31     100%   1       21
41     T3 departs @ 41     T3 departs @ 41     100%   0       15
54     T4 departs @ 54     T4 departs @ 54     0%     0       -
64                         T5 arrives @ 64     100%   0       0

As the fifth truck has begun loading, we need to schedule its departure.
The finish time for this event is TNOW + ∆ = 64 + 21 = 85. We also need
to schedule the arrival of the sixth and final truck. The finish time for that
event is TNOW + ∆ = 64 + 13 = 77. With these two events added, our paper
looks like this:

TNOW   Events              Chronological       Util   Queue   Wait
0      T1 arrives @ 0      -                   -      -       -
0      T1 departs @ 16     T1 arrives @ 0      100%   0       0
5      T2 arrives @ 5      T2 arrives @ 5      100%   1       -
10     T3 arrives @ 10     T3 arrives @ 10     100%   2       -
16     T4 arrives @ 26     T1 departs @ 16     100%   1       11
26     T2 departs @ 31     T4 arrives @ 26     100%   2       -
31     T5 arrives @ 64     T2 departs @ 31     100%   1       21
41     T3 departs @ 41     T3 departs @ 41     100%   0       15
54     T4 departs @ 54     T4 departs @ 54     0%     0       -
64     T5 departs @ 85     T5 arrives @ 64     100%   0       0
       T6 arrives @ 77

At this point, no further activities can begin, so we scan the event list
for the earliest event. This is the arrival of the sixth truck at time 77. We
transfer this event to the chronological list and update TNOW to 77. As the
batch plant is busy with the fth truck, the new truck is forced to queue.
After updating the statistics, our paper looks like this:

TNOW   Events              Chronological       Util   Queue   Wait
0      T1 arrives @ 0      -                   -      -       -
0      T1 departs @ 16     T1 arrives @ 0      100%   0       0
5      T2 arrives @ 5      T2 arrives @ 5      100%   1       -
10     T3 arrives @ 10     T3 arrives @ 10     100%   2       -
16     T4 arrives @ 26     T1 departs @ 16     100%   1       11
26     T2 departs @ 31     T4 arrives @ 26     100%   2       -
31     T5 arrives @ 64     T2 departs @ 31     100%   1       21
41     T3 departs @ 41     T3 departs @ 41     100%   0       15
54     T4 departs @ 54     T4 departs @ 54     0%     0       -
64     T5 departs @ 85     T5 arrives @ 64     100%   0       0
77     T6 arrives @ 77     T6 arrives @ 77     100%   1       -

At this point, we do not need to schedule another arrival event as the
sixth truck was the last truck in our problem. Thus, no further activities
can begin and we need to scan the event list for the earliest event. This is
the departure of the fifth truck at time 85. We transfer this event to the
chronological list, and update TNOW to 85. With the departure of the fifth
truck, the sixth truck can begin loading. This causes our queue to become
empty, and we calculate the wait time for the sixth truck as 85 − 77 = 8
minutes. Our paper now looks like this:

TNOW   Events              Chronological       Util   Queue   Wait
0      T1 arrives @ 0      -                   -      -       -
0      T1 departs @ 16     T1 arrives @ 0      100%   0       0
5      T2 arrives @ 5      T2 arrives @ 5      100%   1       -
10     T3 arrives @ 10     T3 arrives @ 10     100%   2       -
16     T4 arrives @ 26     T1 departs @ 16     100%   1       11
26     T2 departs @ 31     T4 arrives @ 26     100%   2       -
31     T5 arrives @ 64     T2 departs @ 31     100%   1       21
41     T3 departs @ 41     T3 departs @ 41     100%   0       15
54     T4 departs @ 54     T4 departs @ 54     0%     0       -
64     T5 departs @ 85     T5 arrives @ 64     100%   0       0
77     T6 arrives @ 77     T6 arrives @ 77     100%   1       -
85                         T5 departs @ 85     100%   0       8

As the sixth truck has begun to load, we need to schedule its departure.
This will happen at TNOW + ∆ = 85 + 15 = 100. With this event added, our
paper looks like this:

TNOW   Events              Chronological       Util   Queue   Wait
0      T1 arrives @ 0      -                   -      -       -
0      T1 departs @ 16     T1 arrives @ 0      100%   0       0
5      T2 arrives @ 5      T2 arrives @ 5      100%   1       -
10     T3 arrives @ 10     T3 arrives @ 10     100%   2       -
16     T4 arrives @ 26     T1 departs @ 16     100%   1       11
26     T2 departs @ 31     T4 arrives @ 26     100%   2       -
31     T5 arrives @ 64     T2 departs @ 31     100%   1       21
41     T3 departs @ 41     T3 departs @ 41     100%   0       15
54     T4 departs @ 54     T4 departs @ 54     0%     0       -
64     T5 departs @ 85     T5 arrives @ 64     100%   0       0
77     T6 arrives @ 77     T6 arrives @ 77     100%   1       -
85     T6 departs @ 100    T5 departs @ 85     100%   0       8

No further events can begin, so we scan the event list for the earliest event.
The only event on the list is the departure of the sixth truck, so we transfer
that event to the chronological list and update TNOW to 100. As there are
no other trucks to service, the batch plant is now idle. After updating the
statistics, our paper looks like this:
TNOW   Events              Chronological       Util   Queue   Wait
0      T1 arrives @ 0      -                   -      -       -
0      T1 departs @ 16     T1 arrives @ 0      100%   0       0
5      T2 arrives @ 5      T2 arrives @ 5      100%   1       -
10     T3 arrives @ 10     T3 arrives @ 10     100%   2       -
16     T4 arrives @ 26     T1 departs @ 16     100%   1       11
26     T2 departs @ 31     T4 arrives @ 26     100%   2       -
31     T5 arrives @ 64     T2 departs @ 31     100%   1       21
41     T3 departs @ 41     T3 departs @ 41     100%   0       15
54     T4 departs @ 54     T4 departs @ 54     0%     0       -
64     T5 departs @ 85     T5 arrives @ 64     100%   0       0
77     T6 arrives @ 77     T6 arrives @ 77     100%   1       -
85     T6 departs @ 100    T5 departs @ 85     100%   0       8
100                        T6 departs @ 100    0%     0       -

At this point, no other activities can begin and a scan of the event list
shows that it is empty. The simulation has come to an end.

Calculation of Batch Plant Utilization


We'll begin our analysis of the results of this simulation by calculating the
utilization of the batch plant. On our sheet of paper, we have 12 observations
of the utilization. We might (naïvely) calculate the overall utilization by
adding up these 12 observations and dividing by 12. However, this would be
an error. Utilization of a server is an example of a statistic that is intrinsic
(or time-based). An intrinsic statistic is one in which the amount of time that a
particular observation is in place must be taken into account when calculating
the average (or any of the other familiar statistical estimators: variance,
standard deviation, etc.). In other words, we need to take a weighted average.
The observations need to be weighted by the amount of time the statistic held
that particular value.
If we were to plot the utilization of our batch plant with respect to simu-
lation time, we would see that it forms the step function shown in Figure 5.9.


Figure 5.9: Batch Plant Utilization vs. Simulation Time

If we denote this step function by f, then the average utilization will be the
area under f divided by the total simulation time:

\[
\frac{1}{100}\int_0^{100} f(x)\,dx
= \frac{1\times(54-0) + 0\times(64-54) + 1\times(100-64)}{100} = 90\%.
\]
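
The same bookkeeping is easy to automate. The following is a minimal Visual Basic
sketch (a hypothetical stand-alone helper, not a Simphony element or built-in
function) that computes a time-weighted mean from the times at which a step
function changes value; applied to the utilization observations above, it returns 0.90.

Module IntrinsicStatistics
    ' times(i) is when the statistic changed to values(i); endTime is when the run ended.
    Function TimeWeightedMean(times As Double(), values As Double(), endTime As Double) As Double
        Dim area As Double = 0.0
        For i As Integer = 0 To values.Length - 1
            ' Each value holds from times(i) until the next change (or the end of the run).
            Dim holdUntil As Double = If(i < values.Length - 1, times(i + 1), endTime)
            area += values(i) * (holdUntil - times(i))
        Next
        Return area / (endTime - times(0))
    End Function

    Sub Main()
        ' Utilization is 100% from t = 0, 0% from t = 54, and 100% from t = 64 until t = 100.
        Console.WriteLine(TimeWeightedMean({0.0, 54.0, 64.0}, {1.0, 0.0, 1.0}, 100.0)) ' 0.9
    End Sub
End Module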

Calculation of the Average Length of the Queue


The length of a queue is another example of an intrinsic statistic. As before,
we plot the length of the queue with respect to simulation time and see the
step function shown in Figure 5.10.


Figure 5.10: Batch Plant Queue Length vs. Simulation Time

If we denote this function by g, then the average length of the queue will be
the area under g divided by the total simulation time:

\[
\frac{1}{100}\int_0^{100} g(x)\,dx = 0.55 \text{ trucks}.
\]
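
As a check, the area under g can be read directly from the Queue column of our
sheet of paper; the queue is non-empty only between times 5 and 41 and again
between times 77 and 85:

\[
\frac{1}{100}\Bigl[1(10-5) + 2(16-10) + 1(26-16) + 2(31-26) + 1(41-31) + 1(85-77)\Bigr]
= \frac{55}{100} = 0.55.
\]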

Calculation of the Average Waiting Time


The average waiting time of an entity in a queue is not an intrinsic statistic,
so we can calculate the average in the usual way:

\[
\frac{0 + 11 + 21 + 15 + 0 + 8}{6} \approx 9.17 \text{ min}.
\]

5.3 Modelling Production Systems


So far, the simulation models that we've built have been very simple. We have
been able to model all of our examples mathematically and managed
to get results that agree with the simulation models. But what about real-
world problems? They tend to be considerably more detailed and complex.
A useful model of a real-world problem may need to consider some or all of
the following:

1. Different kinds of entities: In our earthmoving example, all the entities
(trucks) were identical. In reality, a truck fleet will consist of different
kinds of trucks, and each will have a different capacity, hauling speed,
etc.

2. Activities with variable durations: Activities never take the exact same
amount of time every time they're performed. In reality, they take a
varying amount of time that depends on a large number of factors. In
discrete event simulation, activity durations are traditionally modelled
using probability distributions.

3. Interdependency of activities: Activities are often dependent on the
start and/or completion of prior activities. For example, in a modular
housing facility, one crew may be framing the floor, while another crew
frames the walls. Either crew may complete their work first, but assembly
of the building cannot begin until both the floor and the walls
are complete.

4. Shared resources (i.e., resources that have not been exclusively assigned
to a particular activity): In our earthmoving example, the excavator
may be required for activities other than loading trucks, in which case,
trucks will be forced to wait if the excavator is engaged elsewhere.

5. Breakdown and repair of equipment: In our earthmoving example, a
breakdown of the excavator would result in the production rate falling
to zero.

6. Calendar days: If the results of our simulation are to be useful for
scheduling purposes, we cannot simply report the total amount of time
the project requires. We need to be able to tie our simulation time to
actual dates. In addition, work does not generally happen 24 hours a
day, 7 days a week. We need to take into account downtime between
shifts and non-working days such as weekends and holidays.

Taking all of these issues into account would make a mathematical solution
impracticable. However, it is relatively easy for a simulation model to take
all of these factors into account. Let's look at an example of a problem in
which many of these issues turn up.

5.3.1 A Tunnelling Problem


A sanitary sewer tunnel is being constructed by The City of Edmonton. The
tunnel will be dug by a tunnel boring machine (TBM) and lined with precast
concrete liners. In this example, we will only be concerned with excavation
of the tunnel itself; we will not model the construction of the working and
removal shafts, nor the installation and removal of the TBM.
The tunnel is 1227 meters long and bore-holes have been dug along the
proposed route to determine soil conditions. Two main types of soil were
discovered: a sandy soil that should be relatively easy to dig through and a
heavy clay that will be considerably more difficult. Fortunately, the City has
historical data describing the excavation rate of the TBM in similar soils,
and probability distributions have been fitted to this data for use in our
simulation model. The results of this analysis are shown in Table 5.2.
The City plans to use two trains to support the TBM. The length of the
concrete liner segments is 1 meter, so each train is configured with enough
capacity to hold 1 meter of spoil. When traveling from the undercut to the
TBM, a train travels at 4.2 km/h and carries sufficient liner segments to
line 1 meter of tunnel. Once a train arrives at the TBM, and assuming the
TBM is not otherwise engaged, excavation of 1 meter of tunnel can begin.
During excavation, the concrete liner segments are unloaded from the train
and placed on the tunnel floor. It takes 15 minutes to unload these liners.
Once excavation and unloading of the liner segments is complete, the train
returns to the undercut. On the return trip, trains travel at 3.6 km/h. When
a train arrives at the undercut it takes 15 minutes and 6 minutes, respectively,
for the crane to unload the spoil and load a new set of liner segments onto
the train. At this point, the train is ready to travel out to the TBM again;
however, only one train is permitted in the tunnel at any time (since there is
insufficient space for trains to pass each other), so it must wait for the other
train to return before it can begin its outward journey.
After a train departs, the TBM must complete two activities before ex-
cavation of the next meter of tunnel can begin. First, the liners that were
delivered by the train (and are now sitting on the tunnel floor) must be
installed. The TBM has an articulated hydraulic arm to perform this op-
eration, which takes 24 minutes. Second, the TBM needs to be reset in
preparation for the next excavation cycle. It takes 15 minutes to do this.
In addition to the operations described above, there are two further activities
that interrupt construction on a regular basis. First, after every 6
meters of progress, the track and the utility connections to the TBM need
to be extended. This operation takes 4 hours, and no other work can be
performed during this time; i.e., no trains can be in the tunnel and the TBM
must be idle. Second, after every 90 meters of forward progress, surveying
must take place to ensure that the TBM is not off course. It takes 8 hours to
complete the surveying and, as with track extension, no other work can be
done while it is taking place. At points where surveying and track extension
are both required, surveying takes precedence.

Table 5.2: Soil Types and Excavation Rates

Chainage (m)     Soil Type     Excavation Rate (m/h)
0 to 772         Sandy         Beta(5.2, 3.7, 1.5, 2.8)
772 to 1036      Heavy Clay    Beta(5.9, 4.3, 0.6, 1.1)
1036 to 1227     Sandy         Beta(5.2, 3.7, 1.5, 2.8)

Figure 5.11: General Purpose Model of the Train Cycle

Modelling Strategy
The first step in creating any discrete event simulation model is deciding
what the entities flowing through the model will represent. In the tunnelling
operation described above, it is the trains that move from place to place,
so we will define our entity to represent a train. Next, we need to examine
the process we're modelling and identify the resources. In our case there are
three: the TBM, the crane, and the track. Finally, in order to begin the
modelling process, we need to identify a portion of the problem that is small
enough for us to understand without becoming overwhelmed. The portion
of the tunnelling problem we'll tackle first is the train cycle.
The train cycle is the journey made by the train. First it travels out to
the TBM, where the liners are unloaded and the muck cars are filled with
spoil. Then, it returns to the undercut, where the spoil is unloaded by the
crane and a new set of liners is loaded. Finally, the train waits to begin its
next trip out to the TBM. Figure 5.11 shows what the train cycle might look
like as a General Purpose model.

5.3.2 Resource Elements


Before we can look at the train cycle model in detail, we need to introduce
four new kinds of elements that allow us to model shared resources. The first
two elements we'll discuss allow us to define the resource itself, and to define
one or more queues in which entities will wait when the resource is occupied
with another entity. We do both of these things with the Resource and File
elements, respectively.

The Resource Element



A Resource element is responsible for defining a shared resource.
Although it has an output point, entities do not flow out of it.
Rather, the output point is used to connect the Resource to one
or more File elements. These File elements define the queues in
which entities may be waiting for the Resource. Each Resource element has
the following properties:

ReportStatistics (input): A boolean value indicating whether or not the
Resource should appear in the Statistics Report that Simphony generates.

Servers (input): The total number of servers. Each server is considered to
be identical. Generally, an entity requiring a resource will be served by
a single server, thus allowing a Resource with two servers to serve two
entities simultaneously. It is possible, however, for an entity to request
more than one server.

Available (output): The number of servers currently available, i.e., the
number of servers currently idle and available for use by an entity.

InUse (output): The number of servers currently in use, i.e., the number
of servers currently serving an entity. The number of servers available
plus the number of servers in use will equal the total number of servers.

Utilization (statistic): An intrinsic (time-dependent) statistic describing
the utilization of the resource's servers over time.

The File Element


A File element is responsible for defining a queue in which entities
wait for a shared resource. Although it has an input point,
entities do not flow into it. Rather, the input point is used to
connect the File to one or more Resource elements. These Resource
elements define the resources that queued entities may be
waiting for. Each File element has the following properties:

IsBlocking (input): A boolean value indicating whether or not entities
further back in the queue can be served prior to the entity at the head
of the queue. To illustrate how this property works, consider a situation
in which a queue contains two entities waiting for a resource with no
servers currently available. Suppose that the entity at the head of the
queue requires two servers to perform its work and the next entity
requires only one. If a single server of that resource became available,
what should happen? Should the server sit idle waiting for a second
server to become available so that the first entity can be served, or
should it be assigned to serve the second entity immediately? The
IsBlocking property specifies how this decision will be made. If the
property is set to True, the former option will be taken, and if set to
False, the latter. The default value is False.

Priority (input): Defines the order in which a Resource will check connected
Files for entities. When one (or more) of the servers belonging
to a Resource element becomes available, the Resource element will
attempt to find an entity that can make use of it. It does this by polling
(checking) each File element it is connected to, to see if an entity is
waiting for a server. The order in which File elements are polled is
controlled by their priority. File elements with higher priority will be
checked first. The default value is 0.

ReportStatistics (input): A boolean value indicating whether or not the
File should appear in the Statistics Report that Simphony generates.

CurrentLength (output): The number of entities in the File at the end
of simulation.

FileLength (statistic): An intrinsic (time-dependent) statistic describing
the length of the File over time.

WaitingTime (statistic): A non-intrinsic (i.e. time-independent) statistic
that describes the amount of time that entities needed to wait for a
server.

In the model under discussion, there are three resource elements: one
to represent the TBM, one to represent the crane, and one to represent the
track. Each of these resources is connected to a File element that represents
the queue in which trains will wait for the corresponding resource. The
number of servers defined at each resource is 1, since there is precisely one
TBM, crane, and track in the system we're modelling. The IsBlocking and
Priority properties of the file elements are left at their default values of False
and 0, respectively.

Once our resources and the corresponding files have been defined, we
need a way for an entity to make use of them to perform an activity. The
modelling elements used to achieve this are called Capture and Release.

The Capture Element


The Capture element is responsible for granting the exclusive
use of one or more servers of a Resource to an entity. When an
entity arrives at a Capture element, a check is made to see if the
servers it requires are available. If so, the servers are granted to
the entity and it exits the Capture element without delay; if not,
the entity will be queued in a File element where it will wait for the servers
to become available. Each Capture element has the following properties:

Resource (input): The name of the Resource element from which the en-
tity requires servers. This property is required. Failing to provide a
value will result in an error being issued when you attempt to run the
model.

Servers (input): The number of servers the entity requires. Normally, this
will be left at the default value of 1; however, larger numbers (or a
formula) are permitted. If you set this property to a value greater than
the total number of servers available at the Resource, a warning will
be issued when you attempt to run the model.

File (input): The name of the File element in which the entity should wait
if the requested servers are unavailable. This property is required.
Failing to provide a value will result in an error being issued when you
attempt to run the model.

Priority (input): The priority of the entity if it needs to be queued. When
an entity is queued in a File element, it will be placed behind all entities
with a priority greater than or equal to its own, and ahead of all entities
with a priority less than its own. It is permissible for this value to be
negative or a formula. The default value is 0.

The Release Element



The Release element allows an entity to return servers it has
previously captured to the pool of available servers. Simphony
is very strict when it comes to releasing servers: if an entity
attempts to release servers it has not previously captured, an
error will be issued and simulation will terminate. Each Release element has
the following properties:

Resource (input): The name of the Resource element to which the entity
will be releasing servers. This property is required. Failing to provide
a value will result in an error being issued when you attempt to run
the model.

Servers (input): The number of servers the entity is releasing. Normally,
this will be left at the default value of 1; however, larger numbers (or
a formula) are permitted. If you set this property to a value greater
than the total number of servers available at the Resource, a warning
will be issued when you attempt to run the model.

5.3.3 The Train Cycle


Having introduced the Resource, File, Capture and Release elements, we can
continue with our analysis of the tunnelling model by looking at the life cycle
of the train entities.
First, the two trains are created by the Create element. This element
is configured to create two entities, the first at time 0, with an interval
between arrivals of 0. This configuration causes both train entities to exit
the element at time 0, at which time they are routed to a Capture element
and both attempt to capture a server of the track resource. Of course, only
one of these entities will be successful as the track resource only possesses
a single server. The train entity that manages to capture the track will be
routed to the traveling task, while the other will be queued in the waiting
file associated with the track resource, and it will remain there until the first
entity releases the track.
The Task element that models the travel of the train from the undercut to
the TBM is a simple unconstrained task, but its duration is somewhat more
complicated, because the distance that the train needs to travel increases as
the TBM moves forward. In order to model this situation, we'll need to use
a mathematical formula, which we'll discuss later.

Upon arrival at the TBM, the train entity attempts to capture a server of
the TBM resource. Once the TBM has been captured, the excavation process
begins, and when this process is complete the TBM is released and the train
entity begins the return journey. At present, we have modelled operations at
the TBM in a simplistic fashion that does not match the process described
above. We will improve the model later, but for now, we will simply assume
that the process of excavating 1 meter of tunnel and unloading the concrete
liners from the train is unconstrained and takes a total of 30 minutes. Notice
also that we don't need to model the TBM as a shared resource here. We
could have modelled the excavation process as a constrained task with a single
server and achieved the same result. However, we know from the process
description that the TBM has other duties (e.g., lining and resetting) that
we'll be adding to our model in the future, so from this perspective it makes
sense to use a shared resource.
The Task element that models the return journey of the train is much the
same as the one modelling the journey to the TBM. As before, the distance
the train needs to travel increases as the TBM advances. We'll discuss the
mathematical formula required to model this in a moment.
Once the train returns to the undercut, it releases the server of the track
resource it captured earlier. When this happens, the other train (which has
been waiting the entire time) captures that server and begins its journey out
to the TBM. Meanwhile, the first train attempts to capture the crane and,
once it obtains the crane's services, begins the processes of unloading the
spoil and loading a fresh set of liners. These tasks are both unconstrained
and have durations of 15 and 6 minutes, respectively. Once loading of the
liners is complete, the train releases the crane and attempts to capture the
track so that it can begin its next outbound journey.

Formulas for the Train Cycle


Now we'll return to the two Task elements modelling the travel of the trains to
and from the TBM. As explained above, the durations of these two elements
are dependent on the distance the TBM has excavated. To model this, we need
to solve two problems: first, we need a way to keep track of the distance that
the TBM has excavated; second, we need a way to express the duration of a
Task element as a mathematical formula.
In the tunnel model shown above, we solve the first problem by using a
modelling element we've already introduced: the Counter element labelled
"Chainage." This Counter is initially set to zero, and every time a train
completes the excavation activity, it is incremented by one. Thus, its current
count represents the distance the TBM has excavated at any given time
during simulation. Note that we also make use of this element to terminate
the simulation: its Limit property is set to 1227 (the length of the tunnel as
specified in the problem description).
To solve the second problem, we need to write a formula in Visual Basic.
The Duration property of the Task element provides access to Visual Basic
via the builder button in the Property Grid, as shown in Figure 5.12.

Figure 5.12: Simphony Formula Editor

We'll discuss formulas and the Formula Editor in more detail in a later
chapter, but for now, it's enough to understand that in Simphony, a formula
is very similar to a mathematical function: it takes an input and it produces
an output. In the case of the Duration of a Task, the input (named Element)
is the Task element itself, and the output is a numeric value indicating how
long a passing entity should be delayed. To calculate the duration in our
case, we need to take the current value of the counter (which is measured in
meters) and divide by 70 (the speed of the train in meters per minute). The
Visual Basic code to do this is shown below:
Public Partial Class Formulas
    Public Shared Function Formula(...) As System.Double
        Return Count("Chainage") / 70.0
    End Function
End Class

Let's examine this formula line by line. The first and last lines define a class
that will contain not only this formula, but all other formulas used by the
model. These two lines will be present in every formula you write, and should
never be modified. All of your Visual Basic code will be placed inside this
class definition. Next, the second and fourth lines define the function that
represents our formula. As with the class definition, these two lines will be
present in every formula you write, and should never be modified. Unlike
the class definition, however, they will vary between formulas. In particular,
the return type of the formula can change. The return type of the formula
above is System.Double, which means a numeric (floating-point) value. This
makes sense since the formula is supposed to calculate the duration of the
Task, which is a numeric value. Henceforth, whenever we discuss formulas
in this book, we will omit these four lines.
The most important line for us is the third, the line that performs the
actual calculation. The line begins with the Return statement, which is
a special statement in Visual Basic that indicates that what follows is the
return value of the formula, and that processing of the formula is over. Next,
we use the Count function to get the distance (in meters) the TBM has
excavated so far. The Count function takes as a parameter the name of a
Counter element, and it returns the current count at that element. In our
case, we pass it the text-literal "Chainage," which is the name of the Counter
tracking the excavation distance. Finally, we divide by 70 (the speed of the
train in meters per minute) to get the number of minutes required for the
train to travel to the TBM.
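For reference, the train speeds given in the problem statement convert to the
per-minute figures used in these formulas as follows:

\[
4.2\ \text{km/h} = \frac{4200\ \text{m}}{60\ \text{min}} = 70\ \text{m/min}, \qquad
3.6\ \text{km/h} = \frac{3600\ \text{m}}{60\ \text{min}} = 60\ \text{m/min}.
\]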
The formula for the return trip is similar, but this time we divide by 60,
as the train is somewhat slower on the return trip:
Return Count("Chainage") / 60.0

Examining the Results


Now that the model is built, we can run it and examine the results. The statistics
report produced by Simphony is shown in Figure 5.13. The most important
information we can glean from this report is that neither the crane nor the
TBM is a bottleneck in the system: the utilization of these resources is
42.8% and 61.3%, respectively, and the average amount of time trains spent
waiting for them is 0 in both cases.

In a model such as ours, the other piece of information in which we would
typically be interested is a curve showing the overall production (in meters)
with respect to simulation time. In our model, we can obtain such a curve
from the Counter element labelled "Chainage." This Counter has a statistical
property called Production and the chart we require can be accessed by
clicking the builder button in the Property Grid as shown in Figure 5.14.
This chart shows what we would expect to see: production tapers off as work
proceeds because the trains have further and further to travel.
Another piece of information that we might like this model to produce is
the cycle time of the trains, i.e., on average, how long does it take a train to
complete a round trip? Unfortunately, at present, our model cannot tell us
this information. In order to produce this, we need to introduce the concept
of custom statistics.

5.3.4 Statistic Elements


There will often be cases when developing models in which the standard
modelling elements provided by the General Template do not provide all of
the information you desire. When this happens, you can make use of cus-
tom statistics. Custom statistics are defined using two modelling elements:
Statistic and StatisticCollect. We'll first provide details on both these ele-
ments, and then show how they can be used to obtain the train cycle time
in our model.

The Statistic Element


A Statistic element is responsible for defining a custom statis-
tic. The Statistic element is similar to the Resource element in
that it does not participate in entity flow; rather, it simply acts
as a repository of observations. Each Statistic element has the
following properties:

Interpretation (input): A value that gives a general idea about what in-
formation the statistic is tracking. Possible values include Cost, Cycle
Time, Production Rate, and Utilization. The default value is Generic
(i.e., uninterpreted). It is always a good idea to select a suitable in-
terpretation, as this will allow the Statistic element to produce charts
that are tailored to that particular application.

Statistics Report
Date: Sunday, January 25, 2015
Project: Model
Scenario: Scenario1
Run: 1 of 1

Non-Intrinsic Statistics
Element Name                   Mean Value   Standard Deviation   Observation Count   Minimum Value   Maximum Value
Scenario1 (Termination Time)   60,090.864   0.000                1.000               60,090.864      60,090.864

Counters
Element Name   Final Count   Overall Productivity   Average Interarrival   First Arrival   Last Arrival
Chainage       1,227.000     0.020                  48.989                 30.000          60,090.864

Resources
Element Name   Average Utilization   Standard Deviation   Maximum Utilization   Current Utilization   Current Capacity
Crane          42.8 %                49.5 %               100.0 %               0.0 %                 1.000
TBM            61.3 %                48.7 %               100.0 %               100.0 %               1.000
Track          100.0 %               0.0 %                100.0 %               100.0 %               1.000

Waiting Files
Element Name   Average Length   Standard Deviation   Maximum Length   Current Length   Average Wait Time
CraneQ         0.000            0.000                1.000            0.000            0.000
TrackQ         0.572            0.495                1.000            1.000            27.969
TrainQ         0.000            0.000                1.000            0.000            0.000

Figure 5.13: Train Cycle Statistics Report

Figure 5.14: Train Cycle Production vs. Simulation Time



Intrinsic (input): A boolean value indicating whether or not the Statistic
is intrinsic (time-dependent). We discussed the difference between
intrinsic and non-intrinsic statistics in the section on Hand Simulation.

ReportStatistics (input): A boolean value indicating whether or not the
Statistic should appear in the Statistics Report that Simphony generates.

ObservedStatistic (statistic): A statistic describing the information
accumulated in the Statistic element.

The StatisticCollect Element


The StatisticCollect element is responsible for adding a single
observation to a Statistic element. Whenever an entity passes
through the element, an observation is added to the Statistic el-
ement it has been associated with. Each StatisticCollect element
has the following properties:

Statistic (input): The name of the Statistic element to which observations
should be added. This property is required. Failing to provide a value
will result in an error being issued when you attempt to run the model.

Value (input): The value of the observation. It is permissible for this to
be a constant value, but normally it will be a mathematical formula.

5.3.5 Collecting Train Cycle Time


Our tunnelling model, with Statistic and StatisticCollect elements added to
collect the cycle time of the trains, is shown in Figure 5.15.
The Statistic element we've added (labelled "CycleTime") is non-intrinsic
and has its interpretation set (appropriately) to CycleTime. The StatisticCollect
element (labelled "CollectCycleTime") will add observations to this
Statistic, but what should these observations be? How do we go about
calculating the time it takes for a train to complete one cycle?
In order to calculate the cycle time, we need to know two things: the
simulation time at which the train entity began its cycle, and the simulation
time at which it finished. The cycle time of the train will be the difference
of these two numbers. In the model above, the train entity begins its cycle
when it enters the Capture element labelled "CaptureTrack" and it finishes
when it leaves the Release element labelled "ReleaseCrane."

Figure 5.15: General Purpose Model of the Train Cycle with Statistic

Figure 5.16: Train Cycle Time vs. Simulation Time

One of the features of entities in the General Template is their ability
to hold information. Each entity can contain three types of information
called attributes: floating-point (numeric) attributes, integral (also numeric)
attributes, and textual attributes. These three types of attributes are stored
in three arrays, associated with every entity, called LX, LN, and LS. The
arrays are zero-based and have no restriction on their length.
Since simulation time in Simphony is a floating-point value, we'll use one
of the floating-point attributes, say LX(0), to store the time an entity begins
its cycle. As mentioned above, an entity begins its cycle when it enters the
Capture element labelled "CaptureTrack." In our model there are two ways
this can happen: the entity can either flow from the Create element labelled
"CreateTrains," or it can flow from the StatisticCollect element labelled
"CollectCycleTime." Entities from either modelling element are routed into an
Execute element labelled "LX(0)=TimeNow" where they get stamped with the
time that they start the current cycle. The time stamp is stored in the LX(0)
attribute using the following formula:
LX(0) = TimeNow
Return True
This formula simply assigns the current simulation time to the LX(0)
attribute of the entity traversing the Execute element. Once this modification
is done, we can set the Value property of the "CollectCycleTime" element to
the following formula:
Return TimeNow - LX(0)
As was mentioned above, a train entity completes its cycle when it leaves
the Release element labelled "ReleaseCrane" and enters the "CollectCycleTime"
element. Thus, when the formula above is evaluated, the TimeNow
variable will contain the time at which the entity finished its cycle. From
this, we subtract the LX(0) attribute, which contains the time at which the
entity started its cycle. The result is the cycle time for the entity. The cycle
time is returned from the formula and collected as an observation in the
"CycleTime" statistic.
After the model is run, the "CycleTime" statistic produces the time chart
shown in Figure 5.16. The chart shows us what we would expect to see: the
cycle time of the trains increases as simulation proceeds (i.e., as the length
of the tunnel grows).

5.3.6 Generate and Consolidate Elements


Recall that at present, the way we have modelled the TBM is quite simplistic
and does not match the problem description. In order to improve our model,
we first need to introduce the Generate and Consolidate elements.

The Generate Element


A Generate element creates one or more clones of a passing entity.
The original entity flows out of the upper branch of the element,
while the clones flow out of the lower branch. A Generate element
has only a single property:

Quantity (input): The number of clones to create. The element's
graphic on the Modelling Surface indicates the value of this
property on the branch the clone(s) will flow out of.

Note that if an entity that has captured resources flows into this element, then
the original entity that flows out of the upper branch will remain associated
with the resources, while the clones that flow out of the lower branch will
not. It is important, therefore, not to release the resources from a clone as
this will cause an error during simulation.

The Consolidate Element


A Consolidate element blocks an entity arriving via the upper
branch until one or more entities arrive via the lower branch.
The entity that was blocked flows out of the element as soon as
the required number of entities arrive via the lower branch, while
the other entities are destroyed. Each Consolidate element has
the following properties:

Quantity (input): The number of entities that must arrive via the lower
branch before a blocked entity will be released. The element's graphic
on the Modelling Surface indicates the value of this property on the
lower branch.

ReportStatistics (input): A boolean value indicating whether or not the
Consolidate element should appear in the Statistics Report that Simphony
generates.

FileLength (statistic): An intrinsic (time-dependent) statistic describing
the number of blocked entities over time.

WaitingTime (statistic): A non-intrinsic (i.e. time-independent) statistic
that describes the amount of time that entities arriving via the upper
branch needed to wait before being released.

In general, a Consolidate element will be paired with a Generate element,
with the original entity eventually being routed to the Consolidate element's
upper branch, while the clones are routed to the lower branch.

5.3.7 Improved Modelling of the TBM


We now return to improving the way we model the TBM. At present, we are
missing two key components: first, the duration of the excavation activity is
set to a constant 30 minutes rather than being based on the soil conditions,
and second, the TBM has other duties (such as lining and resetting) that we
have so far ignored. Let's look at each of these in turn.

Excavation Rate
In the problem description, the rate at which the TBM advances is stochastic
and depends on the type of soil being excavated (which in turn is dependent
on the chainage). The formula we'll use to calculate the duration for the
excavation activity is as follows:
Select Case Count("Chainage")
    Case Is < 772.0 ' Sandy
        Return 60.0 / SampleBeta(5.2, 3.7, 1.5, 2.8)
    Case Is < 1036.0 ' Heavy Clay
        Return 60.0 / SampleBeta(5.9, 4.3, 0.6, 1.1)
    Case Else ' Sandy
        Return 60.0 / SampleBeta(5.2, 3.7, 1.5, 2.8)
End Select

This formula utilizes Visual Basic's Select Case statement to break the cur-
rent chainage down into the three ranges specified in the problem statement.
Then, once the soil conditions are known, a random deviate is generated from
the appropriate distribution. This random deviate is expressed in meters per
hour, so some further calculation is required to determine the number of
minutes required to excavate 1 meter (the length of a liner segment); hence,
we take 60 min/h and divide it by the random deviate. The result is the
duration of the excavation activity in minutes.
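As a rough sanity check (assuming the four Beta arguments are the two shape
parameters followed by the lower and upper bounds of the rate in m/h), the mean
excavation rate in the sandy sections works out to roughly

\[
E[\text{rate}] \approx 1.5 + (2.8 - 1.5)\,\frac{5.2}{5.2 + 3.7} \approx 2.26\ \text{m/h},
\qquad \frac{60}{2.26} \approx 27\ \text{min/m},
\]

which is in the same range as the 30-minute placeholder used earlier. (The true
expected duration differs slightly, since the expected value of 60 divided by the
rate is not 60 divided by the expected rate.)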

Lining and Resetting


There are actually three activities the TBM must perform to complete lining
and resetting. First, the concrete liners that were brought to the TBM by
the train must be unloaded; next, the liners must be installed on the walls of
the tunnel; and finally, the TBM is reset to begin the next cycle. Recall that
unloading of the liners occurs during the excavation phase of the TBM's cycle
and that the train cannot depart, nor lining of the tunnel begin, until both
the unloading activity and the excavation activity are complete. A revised
model with these additional activities is shown in Figure 5.17.
Let's take a look at the life cycle of a train entity in this version of the
model from the point at which the train captures the TBM. Once the train
entity leaves the Capture element, it is cloned by a Generate element. The
original train entity then proceeds to the excavation activity while the clone
flows to the unloading activity. In this version of the model, the duration of
the excavation activity has been modelled using the formula discussed above.
The unloading activity is unconstrained with a duration of 15 minutes, as
specied in the problem statement. Once excavation is complete, the original
train entity flows through the production counter and then into a Consolidate
element where it waits for its clone to complete its work. Of course, if the
clone arrives first, it will not have to wait at all.
Once the clone completes the unload activity, it proceeds to another Gen-
erate element. This element routes the original clone to the Consolidate ele-
ment where it will be consolidated with the original train entity. Meanwhile,
a second clone of the train entity is routed to a Capture element labelled
CaptureTBM2" where it requests the TBM resource. This second clone will
be responsible for the remaining activities in the lining and resetting process.
Notice that in this model, there are two File elements associated with the
TBM resource. This allows us to separate the queuing statistics for the trains
waiting for the excavation activity and the cloned entities waiting to begin
lining.

Figure 5.17: General Purpose Tunnelling Model

When the original train entity finally leaves the Consolidate element, we
can be assured that both the excavation activity and the unloading activity
are complete. Thus, the original train entity releases the TBM and begins
its return journey. As soon as the original train entity releases the TBM
resource, it will be assigned to the second clone. Note how this ensures that
the lining activity cannot begin until both the excavation activity and the
unloading activity have been completed. Moreover, the next train entity ar-
riving at the tunnel face will not be able to begin the excavation or unloading
activities until the second clone releases the TBM resource.
Once the second clone captures the TBM resource, it proceeds to the
lining activity and resetting activities, which are both unconstrained with
durations of 24 minutes and 15 minutes, respectively. Having completed
these two activities, it releases the TBM resource and is destroyed.

Re-examining the Results

As we should expect, when the model is run this time, the results are quite
different. This time, the statistics report tells us that the utilization of the
crane and the TBM is 28.1% and 100%, respectively, and that the average
waiting time for them is 0 and 19.994 minutes, respectively.¹ Clearly, the
more accurate model shows that the TBM is a system bottleneck. Figure 5.18
shows a graph of train waiting time vs. simulation time. The graph shows
that the amount of time trains wait for the TBM falls as simulation proceeds
(which is what we would expect as the amount of time trains spend traveling
increases).
The production curve and train cycle time shown in Figures 5.19 and 5.20,
respectively, are also different from our earlier results. The production curve
no longer has any curvature. This is because the process is no longer limited
by the travel time of the trains; the production rate of the TBM has become
the limiting factor. Also, the portion of the curve that has a slightly lower
slope shows the portion of the tunnel in which heavy clay was encountered.
The cycle time for the trains is markedly different. We can clearly see
the portion of the tunnel in which progress was slowed due to difficult soil
conditions. Moreover, the cycle time in the two different types of soil is fairly
constant. Again, this is because the travel time of the trains is no longer a
limiting factor in our model.

¹ Because of the way excavation duration is modelled, our model is now stochastic. This
means that if you develop the model yourself you may obtain slightly different results.


Figure 5.18: Time Trains Wait for TBM vs. Simulation Time

Figure 5.19: Tunnel Production vs. Simulation Time



Figure 5.20: Train Cycle Time vs. Simulation Time



5.3.8 Valve and Branch Elements


In order to add the activities of track extension and surveying to our model,
we will need to introduce three further modelling elements: the Valve, Acti-
vator, and Conditional Branch.

The Valve Element


A Valve element will allow an entity to pass or prevent it from
passing depending on its state: open or closed. The state of a
Valve element can be changed using an Activator element or it
can be configured to shut automatically after a certain number
of entities have passed through. Each Valve element has the
following properties:

AutoClose (input): A numeric value that specifies the number of entities
that, once opened, the Valve will permit to pass before it is automatically
shut. The default value of zero indicates that, once opened, the
Valve will remain open regardless of the number of entities passing
through.

InitialState (input): Indicates the state of the Valve at the start of simu-
lation: opened or closed.

ReportStatistics (input): A boolean value indicating whether or not the
Valve element should appear in the Statistics Report that Simphony
generates.

FileLength (statistic): An intrinsic (time-dependent) statistic describing
the number of entities blocked by the Valve over time.

WaitingTime (statistic): A non-intrinsic (i.e. time-independent) statistic
that describes the amount of time that entities need to wait before
passing through the Valve.

The Activator Element



An entity passing through an Activator element will cause a specified
Valve element to be opened or closed. If the Valve is already
in the specified state when an entity arrives, the entity will leave
the Activator without modifying the state of the Valve. Each
Activator element has the following properties:

Action (input): Specifies whether the Valve element should be opened or
closed. This property may be a formula. The graphic of an Activator
element on the Modelling Surface indicates the value of this property:
a plus sign indicates that the activator will open the Valve, a minus
sign indicates that it will close the Valve, and a question mark indicates
that this property has been set to a formula.

Valve (input): Specifies the Valve element whose state is to be changed.
This property is required. Failing to provide a value will result in an
error being issued when you attempt to run the model.

The ConditionalBranch Element


A ConditionalBranch element will route an arriving entity out
of one of two branches depending on a specified condition. It
differs from a Generate element in that the entity will only
exit via one branch and no entities will exit via the other. A
ConditionalBranch element has only one property:

Condition (input): A boolean value specifying whether the entity should
be routed out of the upper or lower branch. This property will almost
always be set to a formula.

5.3.9 Track Extension and Surveying


It is now necessary to add the remaining two processes described in the
problem statement to our model: track extension and surveying. A model
with these activities added is shown in Figure 5.21. The modelling strategy
here is to create two entities that will model the track extension and surveying
activities, respectively.

Figure 5.21: Completed General Purpose Tunnelling Model

We'll begin our discussion of the final model by looking at the life-cycle
of the track extension entity. It is created by the Create element labelled
"ExtensionEntity" and proceeds immediately to the Valve element labelled
"BlockExtension." This Valve element has an initial state of "Closed" and its
AutoClose property is set to 1. Thus, the track extension entity will remain at
the Valve until it is opened. When this nally happens, the entity proceeds
to a Capture element (causing the Valve to close automatically) where it
requests the track resource with a priority of 1 rather than the default of
0. It does this so that trains will be blocked from entering the tunnel while
track extension is taking place. Once it is granted the track resource, it
proceeds to another Capture element where it requests the TBM resource.
Once it is granted the TBM resource, it proceeds to release the TBM resource
immediately. The reason the entity requests the TBM resource is to ensure
that the TBM is not engaged with lining or resetting while track extension is
taking place; however, because the TBM is not actually required to perform
track extension, it is released immediately and will be idle during the process.
Next, the entity moves to a Task element labelled "ExtendTrack" that models
the track extension activity. This is an unconstrained Task with a duration of
240 minutes (4 hours). Once track extension is complete, the track resource
is released (allowing trains to once again enter the tunnel) and the entity
completes its cycle by returning to the Valve labelled "BlockExtension" where
it will wait until the Valve is opened again.
The cycle for the survey entity is the same, except this time, the request
for the track resource is made with a priority of 2 to ensure that surveying
takes precedence over track extension. In addition, the duration of the survey
activity is 480 minutes (8 hours).
It remains to discuss how the Valves that block the track extension and
survey entities are opened. Notice that four elements have been inserted into
the model following the Release element labelled "ReleaseTBM": two Con-
ditionalBranch elements and two Activator elements. When a train begins
its return journey and enters the first of these ConditionalBranch elements
(labelled "CheckExtension"), a check is made to see if the value of the pro-
duction counter is a multiple of 6. If so, the entity will flow to the Activa-
tor labelled "ActivateExtension" causing the "BlockExtension" Valve to be
opened, and from there, proceeds to the next ConditionalBranch element;
if the production counter is not a multiple of 6, it simply proceeds directly
to the next ConditionalBranch element skipping the Activator. The formula
used to make this check is:
Return Count("Chainage") Mod 6 = 0

This formula uses the Visual Basic Mod operator to determine if the current
chainage is a multiple of 6. If so, the boolean value True is returned; otherwise
False is returned. The check whether surveying needs to take place works in
the same way, except this time the formula is:
Return Count("Chainage") Mod 90 = 0

since surveying is supposed to take place every 90 meters. If surveying should
take place, the entity passes through an Activator to open the Valve labelled
"BlockSurvey" before beginning the return activity; otherwise it proceeds
directly to the return activity skipping the Activator.

5.4 Adding User Written Code to Models


In the previous section, we introduced user written code (formulas) as a way
to calculate the duration of an activity and to calculate the value to collect
into a statistic. As you might imagine, adding code to our models can be a
very eective modelling technique. With user written code, we can:

• Perform complex calculations;

• Make complex decisions;

• Duplicate the behaviour of existing modelling elements;

• Combine the behaviour of multiple modelling elements in one; and

• Add behaviour beyond that provided by the standard modelling elements.

The key to all of these features is the Execute element.

5.4.1 The Execute Element


The Execute element runs a snippet of user written code (i.e.,
a formula) whenever an entity arrives at its input point. Each
Execute element has the following properties:

Expression (input): The user written code to execute. The
return type of this formula is boolean. If the formula returns
true, the entity will be passed out of the element's output point
after the formula has been executed; if the formula returns false, it is
the responsibility of the formula to do something with the entity, or it
will be destroyed.

To illustrate the use of the Execute element, we will redevelop the tunnelling
example from the previous sections using Execute elements almost exclu-
sively.
Note that throughout this section we assume some familiarity with the
Visual Basic programming language. For those readers who are unfamiliar
with Visual Basic, and for those who would like to review it, we provide a
short introduction to Visual Basic in Appendix B.

5.4.2 Local vs. Global Attributes


In the previous section, we discussed the attribute arrays LX, LN, and LS
associated with an entity, and we used the LX(0) attribute to record the
time at which a train entity began its cycle. These three attribute arrays are
called the local attributes of the entity (hence the L prefix). They're called
local attributes because each entity has its own individual set of attributes.
In other words, if, for example, the LX(0) attribute of one entity is modified,
the LX(0) attributes of other entities are unaffected.
In addition to the local attribute arrays, Simphony also provides a set
of global attribute arrays: GX, GN, and GS. By global, we mean that there
is only one set of these attributes that is shared by all entities. Thus, if,
for example, the GX(0) attribute is modified by one entity, the change is
immediately visible to all other entities.
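As a brief illustration (a hypothetical formula, not part of the tunnelling model), an Execute element's Expression could read:

LX(0) = TimeNow      ' local: only the entity currently passing through is stamped
GX(0) = GX(0) + 1    ' global: the incremented value is shared with every entity
Return True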
For the purposes of our tunnelling model, we will make use of the GX(0)
global attribute to replace the Counter element that tracks the current
chainage of the tunnel. Recall that this particular Counter element does
three important things in our model:

1. It increments the chainage by 1 meter each time an entity passes through;

2. It provides a statistic that tracks production over time (see, for example,
Figures 5.14 and 5.19 above); and

3. It terminates the simulation once the 1227 meter mark is reached.

The Execute element with which we replace the Counter will need to perform
all three.

Figure 5.22: Replacing the Counter Element with an Execute Element

Now, an Execute element does not host any statistics, so in order to
accomplish item 2, we will need to add another Statistic element (in addition
to the one for cycle time). We'll name this element "Production" and its
Interpretation property will be set to OverallProduction. Once this Statistic
has been added, we can replace the Counter with an Execute element, as
shown in Figure 5.22. The Expression property of the Execute element is set
to the following formula:
GX(0) = GX(0) + 1
CollectStatistic("Production", GX(0))
If GX(0) = 1227 Then
    HaltRun()
End If
Return True
The first line of this formula increments the GX(0) attribute by 1, thus
accomplishing item 1 above. Next, the CollectStatistic method is used to
collect the new value of GX(0) into the Statistic element labelled "Production",
which accomplishes item 2. The CollectStatistic method serves a similar
purpose to a StatisticCollect element; it takes two arguments: the name of the
Statistic element into which an observation should be collected, and the value
to collect. Next, a check is made to see if the value of GX(0) has reached
1227, and if so, the HaltRun method is called, causing the current run to
terminate, thus accomplishing item 3. Finally, the value True is returned,
which causes the Execute element to route the entity out of its output point.
If you were to modify the model as just described, and then attempt
to execute it, you would receive an error message from the Task element
labelled "Travel" stating that "A Counter with the name 'Chainage' does not
exist." This happens because the formula that calculates the duration of the
travel activity is dependent upon the counter we've just removed. In fact,
there are five elements containing formulas that are dependent upon that
counter: the travel, excavate, and return activities, together with the two
ConditionalBranch elements. In order to make our model run properly, we
need to modify the formulas of all five elements so that they reference the
GX(0) global attribute instead of the counter. For example, the formula for
the duration of the travel activity needs to be changed from:

Return Count("Chainage") / 70.0

to:
Return GX(0) / 70.0

Once the formulas are updated, our model will run and produce the same
results as it did previously.
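For instance, the Condition property of the ConditionalBranch element labelled "CheckExtension" undergoes the same substitution; its formula becomes:

Return GX(0) Mod 6 = 0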

5.4.3 Collecting Statistics


We introduced the CollectStatistic method above so that we could replace the
functionality of the Production statistic provided by a Counter element. We
can use this same method within an Execute element to replace the StatisticCollect
element labelled "CollectCycleTime", as shown in Figure 5.23. The
formula we will need for the element is:
CollectStatistic("CycleTime", TimeNow - LX(0))
LX(0) = TimeNow
Return True

The first line of this formula collects an observation to the "CycleTime"
statistic using the CollectStatistic method, thus duplicating the functionality
of the original StatisticCollect element. The value collected is the current
simulation time (the time at which the train entity finished its cycle) minus
the value of the entity's LX(0) attribute (the time at which the entity began
its cycle), which is precisely what the StatisticCollect element was collecting
in our original model. Once the observation has been collected, the LX(0)
attribute of the entity is set to the current simulation time, in preparation
for the next cycle. By doing this here, we do not need to worry about doing
it in the relationship leaving the Execute element, as we did in the original
model. Finally, the value True is returned, which causes the Execute element
to route the entity out of its output point.

Figure 5.23: Replacing the StatisticCollect Element with an Execute Element

5.4.4 Opening and Closing Valves


The next step in the conversion of our tunnelling model is to replace the
Activator and ConditionalBranch elements. Using a single Execute element, we
will be able to replace the Activator elements labelled "ActivateExtension"
and "ActivateSurvey", together with the ConditionalBranch elements labelled
"CheckExtension" and "CheckSurvey", as shown in Figure 5.24. This is an
excellent example of an instance in which an Execute element can simplify a
model considerably. The formula for the Execute element is as follows:
If GX(0) Mod 6 = 0 Then
    OpenValve("BlockExtension")
End If

If GX(0) Mod 90 = 0 Then
    OpenValve("BlockSurvey")
End If

Return True

The General template's ConditionalBranch element is essentially an "if"
statement, so it shouldn't come as a surprise to see that it is replaced with
if statements in the code. In this formula, the first if statement checks to see
if the current length of the tunnel (stored in the global attribute GX(0)) is
a multiple of 6. If so, it opens the Valve element labelled "BlockExtension"
using the OpenValve method. This method takes a single argument: the
name of the Valve to open. As you might expect, there is a corresponding
method named CloseValve that also takes the name of the Valve as its only
argument. Next, the formula checks if the current length of the tunnel is a
multiple of 90, and if so, it opens the Valve element labelled "BlockSurvey"
using the OpenValve method. Finally, the value True is returned, which
causes the Execute element to route the entity out of its output point.
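The formulas in this section only ever open valves, but a formula that closes one explicitly is just as simple. A hypothetical illustration (not needed in this particular model):

' Hypothetical example only: explicitly close the survey valve.
CloseValve("BlockSurvey")
Return True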

Figure 5.24: Replacing Activators/Branches with a Single Execute Element

5.4.5 Scheduling Events


We now come to the job of converting all the Task elements in the model to
Execute elements. In order to implement a task or activity, we know from
our hand simulation exercise that we will be required to schedule an event
with the simulation engine. To schedule an event, we said that we needed
to inform the simulation engine of three things: the entity associated with
the event, what event is going to occur, and the simulation time at which
the event will occur. In an Execute element, the method used to schedule an
event is named ScheduleEvent, and as you would expect, it takes three (or
possibly two, as the first is optional) arguments:
1. the entity associated with the event (if omitted, the entity passing
through the Execute element is assumed),

2. the connection point to which the entity should be routed when the
event is processed, and

3. the amount of time that needs to pass before the event should be processed.

As an example, here is the formula for the Execute element that replaces the
travel activity of our model:
ScheduleEvent(Element.OutputPoint, GX(0) / 70.0)
Return False

The first line of this formula is the call to the ScheduleEvent method. In
this example, the method only has two arguments, so it is assumed that the
entity being scheduled is the entity flowing through the Execute element.
The first of the two arguments specifies the connection point to which the
entity should be routed when the event is processed. In this case, it is the
output point of the modelling element associated with the formula (i.e., the
output point of the Execute element). This means that when the event is
processed, the entity will flow out of the Execute element. The second of
the two arguments is the amount of time that needs to pass before the event
should be processed (i.e., the duration of the activity). This is the same
calculation as was used for the duration of the Task element. Finally, notice
that the value False is returned. This is important, because we don't want
the Execute element to allow the entity to pass on after the formula has
been executed; rather, we want the entity to be delayed until the event is
processed.
All of the other Task elements are converted to Execute elements in the
same way. The only real exception is the excavation activity. Its formula
looks like this:
Select Case GX(0)
    Case Is < 772.0 ' Sandy
        Dim Duration = 60.0 / SampleBeta(5.2, 3.7, 1.5, 2.8)
        ScheduleEvent(Element.OutputPoint, Duration)
    Case Is < 1036.0 ' Heavy Clay
        Dim Duration = 60.0 / SampleBeta(5.9, 4.3, 0.6, 1.1)
        ScheduleEvent(Element.OutputPoint, Duration)
    Case Else ' Sandy
        Dim Duration = 60.0 / SampleBeta(5.2, 3.7, 1.5, 2.8)
        ScheduleEvent(Element.OutputPoint, Duration)
End Select
Return False

5.4.6 Capturing and Releasing Resources


The final step in the conversion of our model is to deal with the Capture
and Release elements. As you might imagine, there are methods that can be
used to duplicate the behaviour of these two elements.
The RequestResource method performs the same function as the Capture
element. It takes 6 arguments (though the first and last arguments are
optional):

1. the entity making the request (if omitted, the entity passing through
the Execute element is assumed),

2. the name of the resource being requested,

3. the number of servers desired,

4. the connection point to which the entity should be routed when the
requested servers are granted,

5. the name of the file in which the entity will wait, and

6. the priority of the request (if omitted, a value of zero is assumed).

To illustrate the use of this method, here is the formula for the Execute
element that replaces the Capture element labelled "CaptureTrack" in our
model:

RequestResource("Track", 1, Element.OutputPoint, "TrackQ")
Return False

The first line in the formula is the call to RequestResource. The first argument
of the method is omitted, so it will be assumed that the entity passing
through the Execute element will be making the request. The second argument
specifies the Resource element named "Track" as being requested. The
third specifies that only one server is required. The fourth argument specifies
the connection point to which the entity should be routed when the track
resource is granted. In this case it is the output point of the Execute element,
so when the resource is captured, the entity will flow out of the Execute
element. The fifth argument specifies the File element "TrackQ" as the place
in which the entity will wait if the resource is unavailable. The final (sixth)
argument is omitted, so the request is assumed to have a priority of zero.

Figure 5.25: Tunnelling Model with Execute Elements



After the call to RequestResource, a value of False is returned, indicating
that the entity should not flow out of the Execute element.
All of the other Capture elements are converted in a similar manner. Note
that the elements capturing the track for track extension and surveying need
to specify priorities of 1 and 2, respectively.
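For example, a sketch of the formula for the Execute element that replaces the survey capture might look like the following (the File name "TrackQ" is reused here purely for illustration; the actual model may use a different File element):

' Request one unit of the track with priority 2, so that surveying
' takes precedence over ordinary train requests (which have priority 0).
RequestResource("Track", 1, Element.OutputPoint, "TrackQ", 2)
Return False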
The ReleaseResource method performs the same function as the Release
element. It takes 3 arguments (though the first argument is optional):

1. the entity releasing the resource (if omitted, the entity passing through
the Execute element is assumed),

2. the name of the resource being released, and

3. the number of servers being released.

To illustrate the use of this method, here is the formula for the Execute
element that replaces the Release element labelled "ReleaseTrack" in our
model:

ReleaseResource("Track", 1)
Return True

The first line in this formula is the call to ReleaseResource. The first argument
of the method is omitted, so it will be assumed that the entity passing
through the Execute element will be releasing the resource. The second argument
specifies the Resource element named "Track" as being released, while
the third specifies that one server is being released. Finally, the value True
(rather than the value False) is returned, which causes the Execute element
to route the entity out of its output point.
All of the other Release elements are converted in a similar manner.
This completes our conversion of the elements in our model to Execute
elements. The remaining elements would be considerably more difficult (though
not impossible!) to convert, so we will not include them here. The converted
model is shown in Figure 5.25.

5.4.7 Other Methods


There are several further methods that can be used in formulas that we have
not yet covered as they were not required to convert our model. We will
discuss these methods in this section.

The Count Method


The first method we will discuss is one familiar from our original tunnel
model: the Count method. This method takes the name of a Counter element
as its only parameter, and it returns the current count. In our original tunnel
model, we used a Counter named "Chainage" to track the progress of the
TBM. We then used the Count method to calculate the travel time of trains
from the undercut to the TBM:

Return Count("Chainage") / 70.0

Resource Querying Methods


Both the ServersAvailable and ServersInUse methods take the name of a
Resource element (or of a constrained Task element) as their only argument
and return the number of idle and busy servers, respectively. You can use
these methods to make decisions prior to requesting or releasing a resource:

If ServersAvailable("Loader") > 0 Then
    ' There is an idle loader, so capture it.
    RequestResource("Loader", 1, Element.OutputPoint, _
        "LoaderQ")
Else
    ' Otherwise do something else...
End If
The QueueLength method takes, as its only argument, the name of a File,
constrained Task, Valve, Batch, or Consolidate element and returns the cur-
rent number of entities waiting at that element. Again, this can be useful in
making decisions prior to requesting or releasing a resource:
If QueueLength("LoaderQ") > 0 Then
    ' Another entity needs a loader, so release it.
    ReleaseResource("Loader", 1)
Else
    ' Otherwise do something else...
End If

Sampling Probability Distributions


We've already made use of the SampleBeta method to determine the excavation
rate of the TBM in our tunnelling model. Simphony provides several
more, as summarized in Table 5.3. Each method takes as its arguments the
parameters of the appropriate probability distribution, and returns a random
variate sampled from the distribution.

Table 5.3: Distribution Sampling Methods

Method Arguments
SampleBeta Shape1, Shape2, Low, High
SampleExponential Mean
SampleGamma Shape, Scale
SampleLogNormal Location, Shape
SampleNormal Mean, Standard Deviation
SampleTriangular Low, High, Mode
SampleUniform Low, High
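As a simple illustration (a hypothetical duration formula, not one used in the tunnelling model), a Task whose duration is uniformly distributed between 7 and 13 minutes could use:

Return SampleUniform(7.0, 13.0)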
Chapter 6
Continuous Simulation
In this chapter we discuss combined models, in which variables that change
continuously over time are required to be part of a discrete event model such
as the ones we have discussed so far in this book.
The need for such combined models often arises

when we need to model physical quantities such as fluids or temperatures
that are governed by physical laws and these laws are
expressed as differential equations of state. The combined simulation
modelling program commonly integrates the differential
equations numerically, in step with its computations that describe
the evolution of the discrete events

(Klingener, 1996).
Imagine a model where we are modelling an excavation process but where
dewatering is required, for example. The excavation is a discrete event model,
while the dewatering is continuous. Likewise, consider a tunnelling process in
which the TBM advancement is continuous through a segment of ground, as
opposed to taking a discrete time interval for a given meter of length. The rest
of the model remains discrete, while the TBM advance through the ground is
a function of the penetration rate of the TBM; the length excavated is therefore
determined from a rate function rather than given as a fixed variable.
There are numerous other examples, including those where the flow of water
or other material (e.g., Agilia concrete, asphalt, etc.) or conveyor belts combine
with the typical models we have seen so far.


Since we have already learned about modelling in discrete events, this
chapter will start with a discussion of how we model continuous change in a
system.

6.1 Differential Equations


Continuous models are generally described in the form of state variables
and their rates of change. The latter are often represented in the form of
differential equations. A differential equation states how a rate of change
(a differential) in one variable is related to another variable (Newmann,
2010). Also recall from calculus that a differential equation is any equation
that contains derivatives, either ordinary derivatives or partial derivatives.
To solve a differential equation on an interval is to find a function that satisfies
the differential equation in question on the same interval.
For example, the rate of change of the water level in a tank is often represented
as:

$$h' = \frac{dh}{dt} = -k\sqrt{h},$$

where k is a constant and h is the water level in the tank. The differential
equation simply states that the rate h' at which the water level changes over
time is a function of the level of water in the tank at that time t. Intuitively,
this makes sense, as we expect that the rate will drop as the level of water
in the tank drops due to lesser pressure. To solve the differential equation,
we need to find the function h (the integral of h') such that when h is
differentiated, it yields the right hand side of the equation.
On the computer, we generally solve such equations numerically using,
for example, the Runge-Kutta method.

6.1.1 A Motivating Example


Let's start by reviewing a simple problem from physics to refresh our memory
of differential equations. The problem is calculating how long it will take
to deplete a tank of water, given that we know the starting level of water, the
cross-sectional area of the tank, and the size of the opening (orifice) in the tank.

Conservation of Energy

In this system, like any other natural system, the law of conservation of
energy holds true; i.e., potential energy in the system is transformed into
kinetic energy as time advances:

$$\text{Potential Energy} = \text{Kinetic Energy}. \tag{6.1}$$

If we designate the mass of water as m, the height of the water as h, gravity
as g, and velocity as v, we can rewrite Equation 6.1 as follows:

$$\frac{1}{2}mv^2 = mgh. \tag{6.2}$$

By canceling m from both sides of Equation 6.2 and solving for v, we obtain:

$$v = \sqrt{2gh}. \tag{6.3}$$

Continuity of Flow

If we conceptualize the water tank as a large diameter pipe in a vertical
position (i.e., flow is vertical), then the law that governs the physical flow of
fluids holds true, i.e., continuity of flow. This law can be applied to the
transition of flow from a big diameter pipe (the tank) to a small diameter
pipe (the outflow pipe):

$$A_1 v_1 = A_2 v_2, \tag{6.4}$$

where $A_1$ is the cross-sectional area of the tank, $A_2$ is the cross-sectional area
of the outflow pipe, $v_1$ is the drawdown velocity of the water in the tank,
and $v_2$ is the velocity at which water flows through the outflow pipe.
The velocity at which the water in the tank is drawn down can be equated
to the rate at which the height of water in the tank changes. This can be
expressed as:

$$v_1 = -\frac{dh}{dt}. \tag{6.5}$$

Now the parameter v in Equation 6.3 is the same as the parameter $v_2$ in
Equation 6.4, so substituting for $v_1$ and $v_2$ in Equation 6.4 from Equations 6.3
and 6.5, we obtain:

$$-A_1\frac{dh}{dt} = A_2\sqrt{2gh}, \tag{6.6}$$
and solving for dh/dt gives:

$$\frac{dh}{dt} = -\frac{A_2}{A_1}\sqrt{2gh}. \tag{6.7}$$

The constants on the right hand side of Equation 6.7 ($A_1$, $A_2$, 2, and g) can
be separated from the variable (h) to obtain:

$$\frac{dh}{dt} = -k\sqrt{h}. \tag{6.8}$$

For illustration, if the cross-sectional area of the tank is 1 m² and that of
the orifice is 5 cm² (0.0005 m²), and g (the acceleration due to gravity) is
9.81 m/s², then we have:

$$\frac{dh}{dt} = -\frac{0.0005}{1}\sqrt{2 \times 9.81}\,\sqrt{h} \approx -0.0022147\sqrt{h}. \tag{6.9}$$

Equation 6.9 models the rate at which the height of the water in the tank
will drop over time.

6.1.2 Modelling the Problem Analytically


Suppose, in the example described above, that we wanted to determine the
time t at which the tank reaches a level of 0.1 m (i.e., the value of t at which
h = 0.1). To do this, we first need to determine the function h that satisfies
Equation 6.9. We begin by rearranging Equation 6.8 to get:

$$\frac{1}{\sqrt{h}}\frac{dh}{dt} = -k. \tag{6.10}$$

Next, we integrate both sides with respect to t:

$$\int \frac{1}{\sqrt{h}}\frac{dh}{dt}\,dt = -\int k\,dt. \tag{6.11}$$

The left-hand side of this equation can be simplified via the theorem of
Integration by Substitution to give:

$$\int \frac{dh}{\sqrt{h}} = -\int k\,dt. \tag{6.12}$$

And integrating both sides results in:

$$2\sqrt{h} = -kt + c, \tag{6.13}$$

where c is a constant that will be determined shortly based on the initial
conditions of the problem. We can now solve for h to get:

$$h = \left(-\frac{k}{2}t + \frac{c}{2}\right)^2. \tag{6.14}$$

We leave it as an exercise for the reader to verify that this solution for h
satisfies Equation 6.8. Finally, we know that h = 10 when t = 0 (the tank is
initially filled to a level of 10 m), so we must have $c = 2\sqrt{10}$, and previously
(in Equation 6.9), we had determined that k ≈ 0.0022147, so:

$$h \approx \left(-\frac{0.0022147}{2}t + \frac{2\sqrt{10}}{2}\right)^2 \approx (-0.0011074t + 3.1623)^2. \tag{6.15}$$

It's now a simple matter to determine that h = 0.1 when t ≈ 2570.0.
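The same answer can be reached numerically, which is essentially what Simphony does when it integrates Equation 6.9 (Simphony actually uses the adaptive Runge-Kutta-Fehlberg scheme described later in Section 6.2.6; the fixed-step, classical fourth-order Runge-Kutta sketch below is a standalone Visual Basic illustration only, not Simphony code):

Module TankDrainDemo
    ' Right-hand side of Equation 6.9: dh/dt = -0.0022147 * Sqrt(h).
    Function Rate(h As Double) As Double
        Return -0.0022147 * Math.Sqrt(Math.Max(h, 0.0))
    End Function

    Sub Main()
        Dim h = 10.0   ' initial water level (m)
        Dim t = 0.0    ' elapsed time (s)
        Dim dt = 1.0   ' fixed step size (s)

        ' Advance with classical RK4 until the level falls to 0.1 m.
        Do While h > 0.1
            Dim k1 = Rate(h)
            Dim k2 = Rate(h + dt / 2 * k1)
            Dim k3 = Rate(h + dt / 2 * k2)
            Dim k4 = Rate(h + dt * k3)
            h += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += dt
        Loop

        Console.WriteLine("Level reached 0.1 m at about t = {0} s", t)
    End Sub
End Module

Running this sketch reports a time of roughly 2570 seconds, in agreement with the analytic result.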

6.1.3 Modelling the Problem in Simphony


In Simphony, continuous models are created using a stock and flow paradigm.
Stocks represent the state variables, and flows describe the rate at which the
various state variables are changing. The complete set of modelling elements
that are used for continuous modelling is shown in Figure 6.1.

Figure 6.1: Continuous Modelling Elements



The problem of water draining from a tank described above is then re-
duced to the state variable being represented by a Stock element and the
rate shown in Equation 6.9 embedded in the Flow element. Simphony will
then model the value as time progresses using the Runge-Kutta method (nu-
merically solving Equation 6.9).

6.2 Continuous Modelling Elements


Let us now properly define all modelling elements and then come back and
solve the water tank problem. The modelling elements that are used for
continuous modelling are as follows.

6.2.1 Stock
A Stock element represents a state variable in your model. You can have
as many Stock elements as you wish. Both the input and output point of a
Stock element may only be connected to a Flow element. Each Stock element
has the following properties:
InitialValue (input): The value of the Stock element at the time simula-
tion commences.
Value (output): The current value of the Stock. This value is maintained
by the simulation environment and changes continuously during simulation.
It may also be changed directly by another element in the
model (an Execute element, for example).

In addition to these properties, a Stock element also has a statistic named:

Statistic: An intrinsic (time-dependent) statistic that tracks the value of
the state variable over time.

6.2.2 Source
A Source element represents a source of flow from outside your model. It is
assumed that the capacity of the source is unlimited. A Source element may
only be connected to a Flow element and has no inputs and no statistics. If
you're interested in keeping track of the amount of flow that is coming into
your model, you should use a Stock element with an unconnected input point
rather than this element.

6.2.3 Sink
A Sink element represents a destination for flow to somewhere outside your
model. It is assumed that the capacity of the sink is unlimited. A Sink
element may only be connected to a Flow element and has no inputs and no
statistics. If you're interested in keeping track of the amount of flow that
is leaving your model, you should use a Stock element with an unconnected
output point rather than this element.

6.2.4 Flow
A Flow element represents a rate of flow into or out of a stock. The input
point of a Flow element can only be connected to a Stock or a Source, and
the output point may only be connected to a Stock or a Sink. Each Flow
element has the following properties:

Rate (input): The instantaneous rate of flow. It is permissible for this to
be a constant, but in most cases this will be user written code. The
rate at which a Stock is increasing is the sum of all Flows that are
connected to the Stock's input point, and the rate at which a Stock
is decreasing is the sum of all Flows that are connected to the Stock's
output point.

In addition to this property, a Flow element also has a statistic named:

Statistic: An intrinsic (time-dependent) statistic that tracks the rate of flow
over time.

6.2.5 Watch
A Watch element is responsible for observing a Stock element and generating
an entity if and when a state event occurs. Unlike the previous elements
that have been discussed, the Watch element is intended to be part of your
discrete event model. It is the element that permits communication from the
continuous part of a model to the discrete part. Each Watch element has the
following properties:

Stock (input): The name of the Stock element the Watch element is to
observe.

Threshold (input): The value at which a state event is to occur. The
Watch element will generate an entity when the value of the Stock
element it is observing crosses this value.

Tolerance (input): The tolerance within which a crossing of the threshold
value is to be detected. Simphony's continuous simulation algorithm
will ensure that a time step so large as to cause this tolerance to be
violated is never taken.

Direction (input): The direction of crossing that is considered to be a state
event, and thus causes an entity to be generated. The value of this
property can be Positive (the value of the Stock element transitions
from below the threshold value to above the threshold value), Negative
(the value of the Stock element transitions from above the threshold
value to below the threshold value), or Both (in which case a transition
in either direction will be considered a state event).

The relationships between these properties are illustrated in Figure 6.2.

6.2.6 Runge-Kutta-Fehlberg Integration


Simphony uses the Runge-Kutta-Fehlberg (RKF45) algorithm to perform
numeric integration. At the scenario level, there are three properties that
control how the integration is performed. These are:

AbsoluteError The absolute error tolerance permitted during numerical
integration. The default value is 1 × 10⁻⁵.

RelativeError The relative error tolerance permitted during numerical
integration. The default value is 1 × 10⁻⁵.

TimeStep The maximum amount of time the continuous simulation can
advance before a discrete event must take place. This value controls
the interval at which the statistic of all Stock and Flow elements is
updated. A smaller value will result in more accuracy at the expense
of more memory and/or CPU resources. A larger value will result in
less accuracy, but will use less memory and/or CPU resources. The
default value is 1.

Figure 6.2: State Transitions (the Stock's value plotted against time, showing the threshold with a tolerance band above and below it, positive and negative state transitions where the value crosses the threshold, and the window of time within which an entity will be generated)

Figure 6.3: Continuous Model for Water Tank Problem



It is recommended that the AbsoluteError and RelativeError properties be
left at their default values. Modifying them requires an understanding of the
details of the RKF45 algorithm, which is beyond the scope of this text. For
more information see Hairer, Nørsett, and Wanner (1993).

6.3 Example: Water Draining from a Tank


To model the tank depletion problem we previously described using the Continuous
modelling elements in the Simphony simulation system, we need a
Stock element to represent the level of water in the tank (the state variable),
a Flow element that models the rate of flow from the stock (the differential
equation in Equation 6.9), and a Watch element whose function is simply to
keep track of the value of the state variable and to trigger the creation of a
simulation entity in Simphony when the state variable crosses a threshold we
specify.
In our example, the tank level is 10 m high at the beginning of the simulation.
The cross-sectional area of the tank is 1 m² and its orifice is 5 cm², so the
value of the constant k is the same as that calculated in Equation 6.9. We
want the simulation to stop when the tank level reaches 0.1 m (i.e., 9.9 m
depleted).
The model is simple to construct and is shown in Figure 6.3. The Stock
variable "Height" is configured with an initial value of 10 and is connected to
a Flow element named "Outflow". The rate in the Flow element is set as per
Equation 6.9 using the following code:

Dim Height = GetStockValue("Height")
Return 0.0022147 * System.Math.Sqrt(Height)
A Watch modelling element is included in the simulation to keep an eye on
the water level so that it can trigger an event that results in the termination
of the simulation. We would like the simulation to terminate when the water
level gets as close as possible to zero without fully depleting the tank. A
threshold value of 0.1 m with a tolerance of 0.1 m can be used to achieve
this. The Direction property of the Watch element is set to Negative because
the water level will be approaching the threshold from above.
This setup achieves two things. First, it prevents the water level of the
stock from crossing over to negative values. Second, it prevents the simulation
from running forever without terminating. When the 0.1 m value is crossed,
an entity is fired, which is routed into a Counter element that terminates the
simulation because its Limit property is set to 1.

Figure 6.4: Water Level (m) vs. Simulation Time (seconds)

A plot showing the change in water level vs. simulation time (obtained
from the Statistic property of the Stock element) is presented in Figure 6.4.
The model also shows that it took 2570.4 seconds to empty the water tank
(you can check the Time property of the Counter element, for
example). This agrees quite closely with the analytic solution.

6.4 Example: Chemical Tanks


Two 100-gallon tanks are part of a process in a chemical plant.¹ At the
start of the process, the first tank contains pure water and the second tank
contains water mixed with 150 pounds of treatment chemical. When the
process begins, the contents of the first tank flow through a pipe into the
second tank at a rate of 2 gallons per minute and the contents of the second
tank flow back to the first through another pipe at an identical rate. The
process is illustrated in Figure 6.5.

Figure 6.5: Chemical Tanks

The objective of this example is to determine how long it will take for the
concentration of chemical in the two tanks to equalize. For our purposes, we
will consider the concentration to have equalized when the first tank contains
74.99 pounds of chemical.

¹This example is taken from Kreyszig (2011).

6.4.1 Analytical Solution


Before demonstrating how to solve this problem using a Simphony model,
we provide a purely analytic solution. Let $y_1(t)$ and $y_2(t)$ be functions representing
the amount of chemical in the first and second tanks respectively
at a given time t. Our initial conditions are:

$$y_1(0) = 0 \quad \text{and} \quad y_2(0) = 150. \tag{6.16}$$

From the flows specified between the two tanks we obtain the equations:

$$y_1'(t) = \frac{1}{50}y_2(t) - \frac{1}{50}y_1(t), \tag{6.17}$$

and

$$y_2'(t) = \frac{1}{50}y_1(t) - \frac{1}{50}y_2(t). \tag{6.18}$$

We also know that the total amount of chemical in the system is constant,
so we must have:

$$y_2(t) = 150 - y_1(t). \tag{6.19}$$

Substituting Equation 6.19 into Equation 6.17 gives:

$$y_1'(t) + \frac{1}{25}y_1(t) - 3 = 0, \tag{6.20}$$

which is a first order linear differential equation with constant coefficients.
Its solution is:

$$y_1(t) = 75 + ce^{-t/25}, \tag{6.21}$$

where c is a constant determined by the initial conditions. From the initial
conditions in Equation 6.16 we can calculate that c = −75, so:

$$y_1(t) = 75(1 - e^{-t/25}). \tag{6.22}$$

It's now a simple matter to determine when $y_1(t) = 74.99$: solving gives
$t = -25\ln(0.01/75) \approx 223.07$ minutes.

6.4.2 Modelled Solution


The General Template model for this problem is shown in Figure 6.6.

Figure 6.6: Chemical Tanks Model



The two Stock elements in this model are labelled "Tank #1" and "Tank #2",
and the value of each represents the amount of chemical in the corresponding
tank. The InitialValue property of "Tank #1" is set to 0, and the
InitialValue of "Tank #2" is set to 150. The RecordingInterval property of
both is left at the default value of 1.
The single Watch element is responsible for detecting when the concentration
of chemical in the first tank reaches 74.99 pounds. Its Stock property
is set to "Tank #1" and its Threshold property is set to 74.99. Since the
concentration in the first tank is expected to rise during simulation, the
Direction property of the Watch is set to Positive. Finally, as we do not expect
the first tank to ever reach a concentration of 75 pounds (except in the limit),
the tolerance is set to 0.01.
When the Watch element generates an entity, it is routed to a Counter
element. This Counter has its Limit property set to 1, causing simulation
to terminate as soon as the Watch element generates an entity.
The Rate property of the Flow element labelled "Flow #1" is set to the
following code:

Return GetStockValue("Tank #1") / 50

The rate of chemical flow out of the first tank is, of course, dependent on
the concentration in the tank, so this code looks up the current value of the
"Tank #1" Stock element. The rate code for the Flow element labelled
"Flow #2" is similar:

Return GetStockValue("Tank #2") / 50
When executed, we can see that the model terminates after 224 minutes
by examining the Time property of the Counter element. This agrees well
with the value calculated analytically. To view a graph showing the change
in chemical concentration in the first tank over time, right-click on the Stock
element labelled "Tank #1", and select the View Statistic menu item. Switch
to the Time Chart tab to see the graph. A similar graph can be viewed from
the Stock element labelled "Tank #2". In this case, it can be seen that the
concentration in the tank decreases over time.

6.5 Example: Sanitary Sewer Handling


When land is developed for housing purposes, the developer is often required
to complete all infrastructure services within the neighborhood being developed.
The municipality often inherits this infrastructure. In addition, the
municipality is often required to upgrade infrastructure that connects the
new neighborhood to the municipality. For example, a new intersection near
the entrance to the neighborhood may now be required to facilitate traffic
movement, widening of roads leading to the area may be required, or road
interchanges may be needed. Quite often, sanitary and storm sewers need to be
upgraded to accommodate the added flow into the system. The municipality
and the developer normally agree on a timetable to upgrade or connect the
new services.
Once the developer finishes building the local infrastructure and subdividing
the land into lots, he/she will sell the lots to home builders who start
building and selling houses. In general, a development that is composed of
hundreds of houses takes time to fully develop, and therefore early sewer
servicing can be achieved in a variety of ways. For example, the sewage
can be stored in large transfer tanks and pumped into an existing sewer
nearby through a small pump station and small sewer line. As the development
catches up, this tank storage option will become insufficient due to the
capacity of the tanks or of the receiving nearby sewers. That is normally why
the municipality will build a new trunk line to connect the new
development with one of the major trunks.
Our example is a development consisting of 198 houses. The rate at which
homes are built, sold, and occupied is approximately 5 houses per month for
the first 6 months, then 10 houses per month for the following 6 months, then
12 houses per month for the following 6 months, and finally the balance at
the rate of 6 houses per month for the last 6 month period. For the sake of
our example, we will assume that each month is exactly 30 days long; thus
we have the following projections:

• Over the first 180 days, 30 houses are occupied (1 every 6 days, with
the first on day 6).

• Over the next 180 days, 60 houses are occupied (1 every 3 days, with
the first on day 183).

• Over the next 180 days, 72 houses are occupied (1 every 2.5 days, with
the first on day 362.5).

• Over the final 180 days, the remaining 36 houses are occupied (1 every
5 days, with the first on day 545).
The amount of sewage produced by each household is known to be approximately
1.2 m³/day. The developer has built one temporary transfer
tank with a capacity of 60 m³. It can be filled to 95% capacity before a shut-off
valve is activated, at which point the sewage overflows to an open holding pond;
needless to say, this is not a desirable outcome and is permitted only in an
emergency. The tank is emptied by four sewage hauling trucks that operate
around the clock. The trucks have a capacity of approximately 9 m³, and
are loaded at the transfer tank at a rate of 0.9 m³/min. They also require
about 5 minutes of maneuvering time to get into and out of the loading area,
during which time the area is unavailable to other trucks.
The trip from the new development to the sewage treatment plant takes
between 50 and 80 minutes, with a most likely duration of 70 minutes. When
the truck arrives at the treatment plant, it waits until it secures a dump
bay, of which there are only two available at the plant. Once the bay is
secured, the truck requires roughly 7 to 13 minutes to complete the unloading,
at which point it leaves the bay. Trucks also require about 5 minutes of
maneuvering time when they arrive at and leave the unloading area, during
which time the bay is unavailable to other trucks. The waste treatment
plant charges $13.50/m³ for discharged sewage from haul trucks. The cost of
trucking per trip (loading, transporting, and dumping, excluding plant fees)
is approximately $190.00/truckload.
In addition to the trucks arriving from the new development, there are
also trucks arriving at the treatment plant from other areas. Previous anal-
ysis has shown that the arrival of these trucks is a Poisson process with an
average interarrival time of 14.3 minutes.
The developer is concerned that the upgrades promised by the municipality
may not be in place by the time required. The sewage disposal costs
are increasing with time and may require a new transfer tank. We have been
hired to help the developer assess the situation:

1. Based on the rates projected, forecast the cost associated with hauling
the sewage as the development progresses over a 3 year (1,080 day)
period.

2. Determine if there is a point in time where new home sales have to
stop because the sewer handling has reached its limit.

6.5.1 Solution Overview


We will develop a combined discrete event and continuous model to solve
this problem. The discrete event portion of the model will be responsible for
four things:

1. modelling the loading and dump areas as resources,

2. the creation of houses over time,

3. the travel of trucks to and from the processing plant, and

4. the modelling of other traffic at the processing plant.

The continuous portion of the model will be responsible for two things:

1. the flow of sewage from the houses to the transfer tank, and

2. the flow of sewage from the transfer tank to the trucks.

The model will make use of two global variables to control the flow of sewage
in the continuous portion. The GX(0) variable will be used to track the
number of houses sold; thus the rate of sewage flow into the transfer tank
will be given by GX(0) × 1.2 m³/day. The GX(1) variable will be used to
control the flow of sewage from the transfer tank to a sewage hauling truck.
When a truck is being loaded this variable will be set to a value of 1, and
when a truck is not present it will be set to 0; thus the rate of sewage flow
out of the transfer tank will be given by GX(1) × 0.9 m³/min. The flow of
sewage is illustrated in Figure 6.7.

Figure 6.7: Sewage Flow Schematic (Houses → Transfer Tank at GX(0) × 1.2 m³/day; Transfer Tank → Truck at GX(1) × 0.9 m³/min)

In the problem description, rates and activity durations are specified in
terms of either days or minutes, so it would make sense to use one of these
as the time unit for our model. In our case, we will use days, because (for
the most part) we want to see our results in terms of days. We will therefore
need to convert all of the rates and durations expressed in minutes to days
by either multiplying or dividing by the factor 24 × 60 = 1,440. Table 6.1
summarizes the results of these calculations. The model will be configured
to terminate at the 1,080 day mark by setting the MaxTime property of the
scenario to 1,080.

Table 6.1: Converted Rates and Durations

Activity             Rate or Duration
Truck loading        1296 m³/day
Truck travel         Triangular(0.034722, 0.055556, 0.048611) days
Truck maneuvering    0.003472 days
Truck unloading      Uniform(0.004861, 0.009028) days
Additional traffic   Exponential(0.009931) days
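Equivalently, a Task's duration formula can perform the conversion inline. For example, a sketch of the haul activity's duration (sampling the trip time in minutes and converting to days) might read:

' Travel time sampled in minutes, then converted to days.
Return SampleTriangular(50.0, 80.0, 70.0) / 1440.0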

Finally, since several of the durations specified in the model are stochastic,
the model should be run multiple times to obtain a range of results. In our
example, we will set the RunCount property of the scenario to 30.

6.5.2 Discrete Event Part


We'll begin our discussion of the model with the discrete event portion,
which is shown in Figure 6.8.

Figure 6.8: Discrete Event Portion of the Model

The first thing the discrete event portion of the model is responsible for is
modelling the loading and dump areas as resources. It accomplishes this with
two Resource elements: one representing the loading area at the transfer tank
(with 1 server), and the other representing the dump bays at the processing
plant (with 2 servers). Each resource has its own associated File, so queuing
statistics can be tracked independently.
Next, the discrete event part needs to model the creation of houses over time.
This is done using a Create element to generate entities that represent houses.
The element is configured to create 198 houses. The first house is created at
the 6 day mark, and the interval between subsequent houses is given by the
following code:

Select Case TimeNow
    Case Is < 180
        Return 6
    Case Is < 360
        Return 3
    Case Is < 540
        Return 2.5
    Case Else
        Return 5
End Select

This code ensures that houses are created as specified in the problem statement.
For example, if the current simulation time is less than 180 days, the
time between creation will be 6 days. Thus a total of 180 ÷ 6 = 30 houses
will be created during that time. Similarly, if the current simulation time is
greater than or equal to 180 days and less than 360 days, the time between
creation will be 3 days. So a total of (360 − 180) ÷ 3 = 60 houses will be
created during that time.
Once a house entity has been created, it passes through an Execute element
labelled "Add House" and is then destroyed. The code defined for the
Execute element is:

GX(0) = GX(0) + 1
CollectStatistic("Houses", GX(0))
Return True

This code first increments the global variable tracking the total number of
houses, and then collects the new value into a Statistic element labelled
"Houses". The Interpretation property of the "Houses" statistic is set to
ContinuousVariable, and it can produce a time chart describing the total
number of houses over time.
Next, the discrete event part of the model is required to model the travel
of the sewage hauling trucks. The four trucks are modelled as entities and
are created at the start of simulation by a Create element. They are then
immediately routed to a Capture element where they attempt to capture the
loading area. Once the loading area has been successfully captured, the truck
entity passes through an unconstrained task labelled "Prepare to Load" that
models the maneuvering of the truck into the loading area.
After the truck is in position to be loaded, it enters a Valve element
labelled "Block". Initially, the state of this Valve element is closed. As
simulation progresses, the state (open vs. closed) of this valve is controlled
by the continuous portion of the model: when there is sufficient sewage in
the transfer tank to fill a truck, the valve will be open, and when the level
of sewage in the tank is less than the capacity of a truck, it will be closed.
By operating in this way, the valve ensures that a truck cannot begin to load
until there is sufficient sewage in the tank to fill it.
Once the truck has been permitted to load, it enters an Execute element
labelled "Start Loading" that contains the code:

GX(1) = 1.0
Return True

This code starts the flow of sewage into the truck by setting the GX(1) global
variable to the value 1. Next, the truck entity enters a Valve element labelled
"Load" that has its InitialState property set to Closed and its AutoClose
property set to 1. This valve is controlled by the continuous portion of the
model, and will be opened as soon as the truck is filled.²
Upon completion of the loading process, the truck leaves the "Load" valve
(causing it to automatically shut) and enters an Execute element labelled
"Finish Loading" that contains the code:

LX(1) = GetStockValue("Truck")
SetStockValue("Truck", 0)
GX(1) = 0.0
Return True

The first thing this code does is read the current value of the Stock element
labelled "Truck" (located in the continuous part of the model) and assign
the value to the LX(1) attribute of the entity. We do this so we can record
the correct dumping cost when the truck is unloaded at the processing plant.
Next, the value of the "Truck" stock is reset to zero in preparation for loading
the next truck. Finally, the GX(1) global variable is set to the value 0, causing
the flow of sewage to the truck to cease. The truck then proceeds to an
unconstrained task labelled "Prepare to Haul" that models the maneuvering
of the truck out of the loading area. After this, the truck releases the loading
area, which becomes available to other trucks.
At this point the truck begins its journey to the processing plant, which
is modelled by an unconstrained task labelled "Haul". Once at the processing
plant, the truck enters a Capture element wherein it attempts to capture one
of the dump bays. After a dump bay is obtained, the truck passes through
unconstrained tasks that model maneuvering in preparation to unload, the
unloading process itself, and maneuvering out of the dump bay. Thereafter,
the truck releases the dump bay via a Release element.
Next, the truck passes through a conditional branch that filters out the
additional traffic to the processing plant (discussed in a moment) and then
passes through a CostCollect element that records the cost of dumping the
sewage. The unit cost for this is $13.50 and the quantity is given by the code:

Return LX(1)

The truck then passes through an unconstrained task modelling its return to
the loading area, a CostCollect element that registers the $190.00 in trucking
costs, and finally begins its cycle anew.
²This is an example of a common strategy for modelling continuous activities. For
more information see Section 6.7.1.

The last responsibility of the discrete event portion is to model additional
traffic to the processing plant. This is accomplished by using a Create element
to introduce additional truck entities into the truck cycle at the point
where trucks attempt to capture a dump bay. Of course, the LX(1) attribute
of these new trucks will be set to the default value of zero, so they are easily
filtered out and destroyed after they have released their dump bay by a
ConditionalBranch element with the following code for its Condition property:

Return LX(1) > 0
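The interarrival time of this additional traffic can be specified on the Create element with a formula such as the following (a sketch using the converted rate from Table 6.1):

' Exponentially distributed interarrival time with a mean of
' 14.3 minutes, expressed in days (14.3 / 1440 = 0.009931).
Return SampleExponential(0.009931)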

6.5.3 Continuous Model


We'll now consider the continuous portion of the model, which is shown in
Figure 6.9.
The continuous portion contains two Stock elements: one representing
the amount of sewage in the transfer tank, and the other representing the
amount of sewage in the truck currently being loaded (if any). The sewage
being produced by the houses is represented by a Source element. Both Stock
elements have an initial value of 0 and a recording interval set to once per
day.
The flow of sewage is modelled by two Flow elements: one representing
the flow from the houses to the transfer tank, and the other representing the
flow from the transfer tank to a truck. The rate of flow from the houses to
the transfer tank is given by the following code:

Return GX(0) * 1.2

while the rate of flow from the transfer tank to a truck is given by:

Return GX(1) * 1296.0

These formulas are, of course, equivalent to the rates given in Figure 6.7,
with the truck-loading rate converted from m³/min to m³/day.
The continuous model also contains four Watch elements. The first is
responsible for detecting when a truck has completed loading. It is observing
the "Truck" stock, and watching for the threshold 8.5 m³ to be passed in
a positive direction within a tolerance of 0.5 m³. When this element is
triggered, an entity passes through an Activator element causing the Valve
labelled "Load" in the discrete portion of the model to be opened, thus
releasing the waiting truck.
Figure 6.9: Continuous Portion of the Model

The second Watch element is responsible for detecting when the level of
sewage in the transfer tank is high enough for trucks to load. It is observing
the "Transfer Tank" stock, and watching for the threshold 9.5 m³ to be passed
in a positive direction within a tolerance of 0.5 m³. When this happens, an
entity passes through an Activator element that will open the Valve labelled
"Block" in the discrete portion of the model, thus permitting trucks to load.
The third Watch element is responsible for detecting when the level of
sewage in the transfer tank is too low for trucks to load. It is observing the
"Transfer Tank" stock, and watching for the threshold 9.5 m³ to be passed
in a negative direction within a tolerance of 0.5 m³. When this happens, an
entity passes through an Activator element that will close the Valve labelled
"Block" in the discrete portion of the model, thus preventing trucks from
loading.
The final Watch element is responsible for detecting if the transfer tank
overflows into the holding pond. It is observing the "Transfer Tank" stock,
and watching for the threshold 57.0 m³ (95% of the tank's capacity) to be
passed in a positive direction within a tolerance of 3.0 m³. When this happens,
an entity will pass through a Counter element, which registers the fact
that an overflow occurred. The simulation time at which the tank first overflowed
can therefore be obtained from the FirstTime statistic of the Counter.

6.5.4 Results
The costs reported by the model (summarized across all 30 runs) are shown
in Figure 6.10. As the standard deviation reported is quite small, we can
safely assume that the costs will be approximately $5.88M.
To determine if (and when) the proposed sewage handling strategy reaches
its limit, we need to examine the FirstTime statistic of the "Overflow"
Counter element. This statistic tracks the time at which an entity first passed
through the counter. For a single run, this statistic will only ever contain
one observation (or zero observations if no entities passed the counter); however,
when summarized across all runs, the statistic can report the average
time at which an entity was first seen, the earliest time, the latest time, and
so forth. It can also generate a histogram or cumulative distribution chart.
The average time reported by the counter in our model is approximately 686
days, with an earliest time of approximately 575 days and a latest time of
approximately 813 days. The cumulative distribution chart is shown in
Figure 6.11. Clearly, the proposed strategy will not be able to keep up during
the final 18 months of the project.

Cost Report
Date: Thursday, June 18, 2015
Project: Model
Scenario: Truck Hauling
Run: All Runs

Sewer Handling Costs

Description    Mean            Standard Deviation    Minimum          Maximum
Dumping        $2,259,602.93   $32.27                $2,259,530.10    $2,259,674.60
Trucking       $3,622,717.33   $492.86               $3,621,590.00    $3,623,870.00
Subtotal       $5,882,320.26   $487.08               $5,881,211.75    $5,883,431.82
Grand Total    $5,882,320.26   $487.08               $5,881,211.75    $5,883,431.82

Figure 6.10: Sewer Handling Costs



Figure 6.11: Cumulative Probability of Overflow by Day (cumulative probability vs. simulation time in days, from 500 to 850 days)

6.5.5 Embellishment One


Suppose now that the developer has an option to build a concrete tank that
has a capacity of 350 m³. The cost of this addition will be $1.5M. The tank
is connected to the municipal sewer system through a force main sewer and
a small pump station. The sewer line is 8 inches, and the rate of flow from
the tank when activated is 1.2 m³/min. The pump is programmed to start
pumping when the tank is 80% full and shuts off when the tank reaches 5%
of its volume. We'd like to determine if this design is suitable, how many
houses it will support, and at what point the cost of the tank will break even
with the cost of hauling the sewage.
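One way to sketch the pump in the model, following the same pattern used for truck loading, would be to add a second Flow element out of the transfer tank whose rate is switched on and off by a new global attribute (GX(2) is simply a name assumed here for illustration), together with Watch elements at the 280 m³ (80% full) and 17.5 m³ (5%) marks. The Rate formula of the hypothetical pump Flow element might then be:

' Pump rate when running: 1.2 m³/min × 1,440 min/day = 1,728 m³/day.
Return GX(2) * 1728.0

The Execute element fired by the 80% Watch would set GX(2) = 1.0, and the one fired by the 5% Watch would set it back to 0.0, mirroring the "Start Loading" and "Finish Loading" formulas above.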

6.5.6 Embellishment Two


A contractor that won a tender to design-build a gravity based sanitary
sewer trunk to service the new development is weighing the risks and benefits
associated with the delays he/she expects in completing the tunnel work and
an accompanying pump station. The project was awarded July 1, 2014 and
was expected to be delivered by December 1, 2015. There are clauses in
the contract that stipulate that should the project not be in service by the
delivery date, the contractor will be charged liquidated damages equal to
the cost of any sewage hauled from the new development. If the cost of
accelerating construction is about $6,000 per day, would it make sense for
the contractor to take the penalty or accelerate construction?

6.6 Example: Tunnel Construction


The purpose of this example is to demonstrate how certain tasks of an op-
eration can be modelled using a combined discrete-continuous simulation
approach. The example is also used to show how interruptions to tasks
modelled continuously can be represented. A tunnel construction project is
selected as a case study. The discrete event model discussed in Chapter 5 and
a real tunnel construction project, SA1A, served as the basis for developing
the example.

6.6.1 The SA1A Tunnelling Project


Tunnel construction using Tunnel Boring Machines (TBMs) is typically carried
out over a significant length and takes considerable time to complete. The
operation is characterized by cyclic processes that interact with each other.
Any disruption to a cycle causes most, if not all, of the other cycles to be
halted, hence losing the rhythm achieved while in steady state. Regaining
steady state conditions for equipment-intensive operations such as this one
takes a significant amount of time. Time lost during start-up and any downtime
adds up to significant delays in cases where these stoppages are frequent,
resulting in schedules and costs being overrun.
Consequently, it is imperative to identify the dierent types of delays that
could be experienced on TBM tunnelling projects, the relative impact of their
occurrence on cost and schedule, and possible mitigation strategies for those
with signicant impact. In this example, we will introduce the concept of
delays, the dierent types of delays encountered in tunnelling, and assess
their impact on project cost and schedule using a simulation model.
A delay, in the context of this example and consistent with the denition
in Adrian and Boyer (1976), can be dened as any interruption to the progress
of tunnelling. That is, any event or situation for which tunnelling must cease,
outside the normal operation of the tunnelling cycle. A list of most frequently
occurring delays on most TBM Tunnel construction projects include:

• TBM,

• TBM hydraulic system,

• TBM electrical system,

• TBM water system,

• crane interruptions due to bad weather or other reasons,

• rocks,

• voids and PVC as-builts, and

• surveying.

With the exception of surveying, the other delay types cannot be scheduled
or anticipated beforehand. In other words, more realistic and precise
information about delays in this category can only be obtained once the
project has commenced. However, this is not a problem, because most tunnelling
projects take a long time to complete; hence, it is still possible
to perform meaningful simulation-based delay analysis using data from the
site, which benefits the project.
Details of delays, also referred to here as delay information, that need to
be established on a project-by-project basis include: verification of whether
a given delay exists, its interarrival times, and its duration. There is a
scientifically proven, systematic method for obtaining this information: Method
Productivity Delay Modelling (MPDM). MPDM is a technique that was
proposed by Adrian and Boyer (1976) for the measurement, prediction, and
improvement of a project's productivity in relation to the amount of delay
experienced. MPDM was applied to a real TBM tunnelling project to obtain
this type of information.
A description of the project is presented along with related process details
and the delay information obtained for it. Thereafter, a combined discrete-
continuous simulation model for the tunnelling process is presented with
these delays embedded within it. Results of the experimentation done with
the simulation model to investigate the influence of delay dynamics on the
tunnelling process are then presented and discussed.
The TBM tunnel studied, the SA1A project, is one segment of a larger
municipal project, the South Edmonton Sanitary Sewer (SESS) overall strategy,
connecting the SW1 pump station at Ellerslie Road and Parsons Road
to Stages SA1b&c. The alignment runs north from the intersection of Parsons
Road and Ellerslie Road, then turns northeast before finally crossing
the Anthony Henday as well as 91 St NW. See Figure 6.12.
The alignment crosses a transportation and utility corridor
that contains a number of existing pipelines, including a Nova Chem 273 mm HVP
line and an ATCO Gas 508 mm line. The tunnel section of interest along this
alignment is approximately 706 m and is to be constructed using an M100
TBM (M17). The project timeline was approximately one year. Details of
this tunnel and of the activities, based on the way the construction process was
set up on site, are summarized in Table 6.2.

Table 6.2: Details of the SA1A TBM Tunnel

Parameter Value
Tunnel length 700 m
Non-delayed production tunnelling rate 0.45 m/hr
Train travel to TBM 4 km/hr
Train return 3.5 km/hr
Unload liners 15 minutes
Unload spoil 15 minutes
Load new liners 6 minutes
Install liners 24 minutes
Reset TBM 15 minutes
Surveying done every 90 m 8 hours
Track extension every 6 m 4 hours

The consulting company, SMA Consulting, which provided project management
and resident engineering services on the project, used the MPDM
technique to collect data for the delays experienced on the project. Details
of this information are summarized in Tables 6.3 and 6.4.
Exponential distributions were chosen when performing input modelling on the
site data for the interarrival times and durations of each delay type, as
this distribution is well suited to modelling highly uncertain events of this kind.
Because the rock and void/PVC delay interarrival data were considered inaccurate,
they were modified prior to this analysis: their baseline interarrival times were
increased by 40% and 60%, respectively, to reduce the frequency of these delays.

Figure 6.12: Alignment of the SA1A Tunnel (Courtesy of SMA Consulting)


Table 6.3: Observed Delay Durations on the SA1A Project

Delay Type                  Count    Average Duration (hrs)
TBM                         17       3.85
TBM Hydraulic Systems       21       4.84
TBM Electrical Systems      8        3.63
TBM Water Systems           2        4.50
Cleaning TBM                6        5.33
Surveying                   6        3.83
Weather/Crane               1        4.00
Rocks                       13       3.77
Voids and PVC As-Builts     13       7.31
Miscellaneous Delays        4        2.88

Table 6.4: Statistical Distributions for Modelling the Dierent Delay Types
Delay Type Time Between Delay (hrs) Delay Duration (hrs)
TBM Exponential(117) Exponential(3.85)
TBM Hydraulic Systems Exponential(90.41) Exponential(4.84)
TBM Electrical Systems Exponential(231.13) Exponential(3.63)
TBM Water Systems Exponential(415.50) Exponential(4.50)
Cleaning TBM Exponential(341.50) Exponential(5.33)
Surveying Exponential(341.50) Exponential(3.83)
Weather/Crane Exponential(424.50) Exponential(4.00)
Rocks Exponential(88.18) Exponential(3.77)
Voids and PVC As-builts Exponential(58.29) Exponential(7.31)
Miscellaneous Delays Exponential(419.75) Exponential(2.88)
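The duration parameters in Table 6.4 coincide with the sample averages in Table 6.3, which suggests that the exponential parameters are means rather than rates. Under that reading, the density assumed for a delay duration or interarrival time \(t\) with mean \(\beta\) is
\[
f(t) = \frac{1}{\beta}\,e^{-t/\beta}, \qquad t \ge 0.
\]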

6.6.2 Simulation Modelling Strategy


A combined continuous-discrete event simulation approach was adopted in
modelling the TBM tunnelling operation. The continuous part of the model
was used to emulate the excavation process of the TBM and the travel of
the train between the working shaft and the tunnel face. Other parts of
the tunnelling process were modelled discretely, for example, the unloading of
liners and the remaining aspects of the TBM cycle (lining, resetting, offloading
spoil, and loading new liners onto the train). Still other processes, such as
surveying, track extension, and delays, were modelled using a combined
continuous and discrete event approach. Each of these segments of the model is
described in detail in the following sections.
Trains were abstracted as entities in this example. These entities served
as virtual entities that flowed through the discrete event model, triggering
the scheduling and processing of simulation events that emulated the tunnel
construction process. There were also other entities within the model,
which were used to emulate surveying work, track extension, and the other delays
encountered on the project.
The train track, the TBM, and the crane are explicitly modelled as re-
sources using the Resource modelling elements available in the GPT, each
having only one server. These Resource elements were each connected to
a unique File modelling element to emulate the queuing of entities for the
resource.
There were a number of parameters that needed to be explicitly represented
in order to facilitate the modelling process. These were stored within
local and global attributes of the model and are summarized in Table 6.5.
The global attributes GX(10) through GX(17) were all initialized to
1.0 prior to the start of simulation by embedding user-written code within
the Initialize property of the scenario. This has the effect of ignoring delay
impacts on TBM excavation until these delays actually occur.
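The exact initialization code is not reproduced here; a minimal sketch of what it might look like, following the pattern of the other embedded formulas, is:
' Assumed form: turn all delay switches on (no delay currently affecting the TBM).
GX(10) = 1.0
GX(11) = 1.0
GX(12) = 1.0
GX(13) = 1.0
GX(14) = 1.0
GX(15) = 1.0
GX(16) = 1.0
GX(17) = 1.0
Return Nothing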

6.6.3 Discrete Event Simulation Models


The solution to this modelling example comprises three main parts: 1)
a discrete event model (referred to as the main DES model) that emulates
offloading trains, loading liners, tunnel lining, resetting the TBM, surveying,
and track extension; 2) another discrete event model that emulates the
occurrence of unanticipated delays; and 3) continuous models that emulate train
movement and the TBM excavation process. Each of these models is discussed
in the following sections.

The Main Discrete Event Simulation Model


With the exception of the two composite modelling elements, the model layout
shown in Figure 6.13 represents the discrete event portion of the combined
model put together for simulating the tunnel construction process. The Task
modelling elements that were used in the previous pure discrete event version
of the model (see Figure 5.21) have been replaced by three Valve modelling
elements labelled Halt Entity for Continuous Travel, Halt Entity until Continuous
Excavation is Complete, and Halt Entity for Continuous Return,
highlighted in pink, deep blue, and pink fill colours, respectively.
In this model, train entities represent the main virtual entities. The
two Valve elements with pink fill are used to halt the train entity until the
continuous travel process from the working shaft to the tunnel face, and vice
versa, is completed. The Valve with the deep blue fill is used to halt the
train entity for the time that the TBM excavation is proceeding continuously.
At time zero, the Create element labelled Create Train Entities releases
two entities, each representing a train. These two are routed into a Capture
element labelled Capture Track, and each makes a request for the train
track resource. The first train entity that makes a request is granted the train
track resource and is routed out of the Capture element. The second train
entity is queued within the File element labelled TrackQ until the train

Table 6.5: Details of Local and Global Attributes


Attribute Parameter it represents
LX(0) - Train Entities A place holder for a time stamp that represents
the start of a train cycle
LX(1) - Delay Entities A time stamp that represents the start of a delay
event
LX(2) - Delay Entities A place holder for the value for the duration of the
delay
GX(0) A flag that indicates whether or not a train is at
the tunnel face, so that continuous TBM excavation can
proceed or be halted
GX(1) A threshold used to raise state events in relation
to the Excavated Tunnel Length state variable
GX(2) A switch that indicates whether the train is trav-
eling from the working shaft to the tunnel face or
not
GX(3) A switch that indicates whether the train is trav-
eling from the tunnel face to the working shaft or
not
GX(4) A flag that indicates whether the target tunnel
length has been fully excavated
GX(5) A variable representing the train track length. It
is also used to derive the threshold value used to
schedule track extension state events
GX(6) A variable used to derive threshold values which
in turn are used to schedule surveying state events
GX(10) A switch that indicates whether a TBM delay is
on or not
GX(11) A switch that indicates whether a TBM Hydraulic
delay is on or not
GX(12) A switch that indicates whether a TBM Cleaning
delay is on or not
GX(13) A switch that indicates whether a TBM Electrical
System delay is on or not
GX(14) A switch that indicates whether a TBM Water
System delay is on or not
GX(15) A switch that indicates whether a Rock delay is
on or not
GX(16) A switch that indicates whether a Voids/PVC de-
lay is on or not
GX(17) A switch that indicates whether a Miscellaneous
delay is on or not

Figure 6.13: Discrete Event Portion of the SA1A Model



track resource becomes available. The first train entity is routed into a
ConditionalBranch element labelled GX(4) = 1.0?. The condition for this
element is evaluated to determine whether the full tunnel length has been
excavated. If the full length has been excavated, the train entity is
routed out the True branch into a Release element, labelled Release Track,
causing it to release the train track resource, after which it is destroyed.
Otherwise, the train entity is routed out the False branch into a Valve
element labelled Halt Entity for Continuous Travel, where it is halted. On
transfer into this Valve element, the train entity triggers the evaluation of
the formula in the incoming trace property, activating the model that mimics
the continuous travel of the train from the working shaft to the tunnel face. At
the end of the continuous travel to the tunnel face, the Valve is opened by
an Activator, releasing the train entity, which flows into a Capture TBM
element. As the entity leaves the Valve, it causes the Valve to close as a result of its
AutoClose property being set to 1.0. The train entity requests the TBM
resource when routed into the Capture TBM element. If the TBM is busy,
the train entity will be queued in the File labelled TBMQ. When the TBM
resource becomes available, the train entity is routed out of the Capture
TBM element into the Generate element, where it gets cloned.
This cloning process emulates the unloading of liners taking place concurrently
with TBM excavation. The original entity that possesses the captured
TBM, i.e., the train entity, is routed out through the top branch of the Generate
element into a Valve labelled Halt Entity until Continuous Excavation
is Complete, where it is halted.
The clone entity, which represents liners that need to be offloaded from
the train, is routed out through the bottom branch into a Task element
labelled UnloadLiners, where it is delayed for 15 minutes before proceeding
to another Generate element labelled Generate2. At this element, another
clone is created to represent the TBM lining and resetting tasks. The original
entity, which represents the unloading of liners, is then routed into a Consolidate
element labelled Consolidate and is halted there if there is no entity waiting at the
top branch. The cloned entity that represents the TBM lining and resetting
tasks is routed out the bottom branch of the Generate2 element into the
Capture TBM2 element, where it makes a request for the TBM resource. If
the TBM is still engaged in the continuous excavation task, i.e., assigned to
the train entity that is halted at the Halt Entity until Continuous Excavation
is Complete Valve, the TBM lining and resetting clone entity will be queued
in the File labelled TBMQ.

When the train entity is transferred into the Halt Entity until Continuous
Excavation is Complete valve, it evaluates the formula within its incoming
trace activating the simulation of 1 m TBM advancement in a continuous
fashion. At the end of the continuous excavation cycle, a state event is
triggered, which causes an activator to open this valve. This results in the
release of the train entity, which prompts the valve to close. This entity is
then routed into a Release TBM element where it releases the TBM resource
before it is transferred into the top branch of the Consolidate element.
The release of the TBM resource and the routing of the train entity into the
Consolidate element concurrently trigger two events. The first is related to
the TBM lining and resetting clone entity being granted the TBM resource.
The other involves consolidation of the train entity and the clone entity that
represents the unloading of liners. The consolidated train entity is then
routed into a Valve element labelled Halt Entity for Continuous Return,
where it triggers the simulation of the train movement back to the working
shaft using a continuous approach.
The GX(4) flag is used to indicate to the trains at the working shaft that TBM
excavation is complete and, hence, that there is no need to travel to the tunnel face.
When this point is reached, each train that is granted the train track resource
is routed out the True branch of the GX(4) = 1.0? ConditionalBranch
element into a Release element, where it releases the train track resource and
is then destroyed by being routed into the Destroy element, i.e., Destroy6.
Destroying all train entities does not mark the end of simulation events.
This is because the delay entities keep cycling in their respective sub-models,
resulting in continuous scheduling and processing of events. Consequently,
the simulation model was set up to terminate when the last loaded train from
the TBM returns to the working shaft, i.e., when the last returning train entity is
routed into the Counter modelling element labelled Terminate Simulation.
A value of one is set for the Limit property of this modelling element to
achieve this termination effect.

6.6.4 Continuous Simulation Models


Elements within Simphony.NET that provide continuous simulation services
are used for modelling specific processes associated with the overall TBM
tunnel construction operation. These processes include the TBM excavation
activity at the tunnel face and the travel of the train between the working
shaft and the tunnel face.
In the continuous models, the state variables associated with these
processes are modelled using Stock modelling elements. The
rate of change of these stock variables is modelled by connecting Flow
elements to the appropriate Stocks. Communication between the discrete
event models and the continuous models is achieved using Watch elements,
global attributes (as switches), Valves, and Activators. Detailed explanations
of each model layout used to simulate processes continuously are presented
next.

Continuous Model for TBM Excavation


To achieve the continuous excavation of the tunnel, the model layout shown
in Figure 6.14, together with the Valve modelling element labelled Halt Entity
until Continuous Excavation is Complete in Figure 6.13, was used to replace
the Task modelling element labelled Excavate in Figure 5.21, which modelled
the TBM excavation cycle discretely. The TBM excavation rate is modelled
as a flow while the excavated tunnel length is modelled as a stock variable.

Figure 6.14: Continuous Model for Simulating TBM Excavation

In the discrete case, the time required for the TBM to advance 1 m
was computed as the quotient of the length advanced (i.e., 1 m) and the TBM
penetration rate. These values were used to schedule simulation events that
emulated each excavation cycle for the TBM. In the continuous case, the
TBM penetration rate is used to derive the distance that the TBM advances
in each cycle. Consequently, the TBM penetration rate is modelled as a Flow
while the excavated tunnel length is modelled as a Stock.
In this TBM tunnelling example, the continuous model is activated when
a train entity captures the TBM resource and is transferred into the Valve element
labelled Halt Entity until Continuous Excavation is Complete within the
discrete event portion of the model. This activation is achieved by setting the GX(0)
switch to a non-zero value in the formula of the incoming trace property of the
Halt Entity until Continuous Excavation is Complete Valve in Figure 6.13.
This switch (GX(0)) is referenced within the formula (shown below) for
the TBM Excavation Rate Flow, thus having the effect of activating or
de-activating the continuous simulation of the TBM excavation process.
Return GX(0) * GX(10) * GX(11) * GX(12) * GX(13) * GX(14) _
    * GX(15) * GX(16) * GX(17) / 60.0

The other switches, i.e., those other than GX(0), relate to the occurrence of
interruptions that result in one or more delays to the TBM excavation process.
The Stock labelled Excavated Tunnel Length maps to a state variable that
represents the total distance that the TBM has advanced.
The tunnelling process is set up such that the TBM advances 1 m in each
cycle. Given that the Excavated Tunnel Length stock value accumulates
each time this continuous model is activated, a method was needed to notify the
simulation engine when the 1 m advancement has been achieved, so that
a state event can be raised at the right point in time. To achieve
this, a separate target variable, the GX(1) attribute, was designated as the
threshold. This threshold is recursively set 1 m ahead of the Excavated
Tunnel Length stock value at the start of each excavation cycle by embedding
the following formula within the incoming trace property of the Valve
element labelled Halt Entity until Continuous Excavation is Complete.
GX(1) = GetStockValue("Excavated Tunnel Length") + 1.0

A Watch element labelled TBM Excavation Cycle Watch (1 m) was
configured to look out for the state event related to the TBM achieving the
1 m advancement. This setup involved specifying GX(1) as the threshold,
the Excavated Tunnel Length Stock as the state variable to watch, and 0.05
as the tolerance. A state event is raised by the simulation engine when
the Excavated Tunnel Length state variable crosses the GX(1) threshold
from below within a tolerance of 0.05. In response to this state event, the
Watch element labelled TBM Excavation Cycle Watch (1 m) creates an
entity. This entity flows into a Valve Activator (labelled Release Train Entity
after Excavation Cycle) and finally into a Destroy element.
The transfer of the entity into the Release Train Entity after Excavation
Cycle Activator has the effect of opening the Valve in the discrete event
model labelled Halt Entity until Continuous Excavation is Complete, hence
releasing the train entity that it was retaining. As the train entity is being
transferred out of this Valve, it triggers the evaluation of the formula within
its outgoing trace property. This causes the de-activation of the continuous
excavation model (i.e., it sets GX(0) to zero) and the incrementing of the threshold
value (i.e., GX(1)) by 1 m. This formula was presented previously when
discussing the threshold for the continuous excavation process.
The activation and de-activation of this continuous excavation model is
repeated in the course of the simulation until the total excavated length of the
tunnel, i.e., value for the Stock labelled Excavated Tunnel Length, crosses
a threshold of 706 m from below within a tolerance of 0.001 m. At the start
of simulation, the value of this stock variable was set to 4.0 m.
The Watch element labelled Tunnel Excavation Watch (706 m) is configured
to look out for the state event related to this. When this state event
is raised, the Tunnel Excavation Watch (706 m) Watch creates an entity
that flows through an Execute element and ends up in a Destroy element.
When this entity is transferred into the Execute element labelled GX(4)
= 1.0, it sets the GX(4) attribute to 1.0. This flag is used to indicate to the
trains at the working shaft that TBM excavation is complete, so there is no
need to travel to the tunnel face.
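The body of this Execute element is not shown in the original; assuming it follows the same pattern as the other embedded code, it would simply be:
' Assumed form: mark the target tunnel length as fully excavated.
GX(4) = 1.0
Return True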

Continuous Model for Train Travel


The movement of the train between the working shaft and the tunnel face is
modelled continuously using the model layout shown in Figure 6.15. This
continuous model layout, together with the Valves labelled Halt Entity for
Continuous Travel and Halt Entity for Continuous Return in Figure 6.13,
was used to replace the Task modelling elements labelled Travel and Return,
which modelled the train travel process discretely in the model version shown
in Figure 5.21.
In the previous approach, i.e., pure discrete event simulation of TBM
tunnelling, travel time was computed as the quotient of the train track length and
the train speed. This value was used to schedule train travel events. In the
current modelling approach, i.e., continuous
simulation, the train speed is used to derive the distance travelled by the train.
Therefore, train travel speeds are modelled as Flows while the train's location
is modelled as a Stock. The approach involves accumulating the stock
variable as the train travels from the working shaft to the tunnel face and
depleting the same stock variable as the train returns from the tunnel face
to the working shaft. The initial value of the Train Location Stock was set
to zero, positioning the train at the working shaft at the start of simulation.

Figure 6.15: Continuous Model for Train Travel

In the discrete event part of the model for this example (see Figure 6.13),
trains are modelled as the main virtual entities. The continuous travel of
trains from the working shaft to the tunnel face is activated as soon as
a train entity is granted the train track resource. A train entity granted
the train track resource is transferred into the Valve modelling element
labelled Halt Entity for Continuous Travel in Figure 6.13. The transfer of
a train entity into this Valve evaluates a formula in which the GX(2) switch
is set to a value of one. This has the effect of activating the Flow element
labelled Train Travel Rate to TBM, resulting in the continuous travel of
the train to the tunnel face. The following formula, which converts the travel
speed of 4 km/hr (4,000 m/hr) to metres per minute, is embedded within the
Train Travel Rate to TBM Flow to evaluate the rate of travel of the train to the
tunnel face.
Return GX(2) * 4000.0 / 60.0

The train entity is halted at this Valve until a state event related to the
arrival of the train at the tunnel face is raised. This state event is raised when
the value of the Stock labelled Train Location crosses a location threshold
from below within a tolerance of 0.05. The length of the train track, i.e.,
GX(5), is used as the threshold for the train arrival at tunnel face state event.
The Watch modelling element labelled Watch for Train Arrival at Tunnel
Face looks out for this state event and creates an entity in response to it.
This entity is routed into an Activator modelling element labelled Opens
Travel Valve, then into a Trace modelling element, and finally into a Destroy
element. As this new entity flows through the Activator, it opens the Valve
labelled Halt Entity for Continuous Travel in the discrete event model part.
As the train entity is transferred out of the Halt Entity for Continuous
Travel Valve in the discrete event model portion, it evaluates a formula in
the outgoing trace property that sets the GX(2) switch to a value of zero.
This has the effect of deactivating the Flow labelled Train Travel Rate to
TBM, halting the simulation of continuous travel (i.e., the accumulation of
the Train Location Stock value) to the tunnel face.
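The trace formulas of this Valve are not reproduced in the original; assuming they follow the same pattern as the delay-switch formulas shown later, they would look something like:
' Incoming trace of the "Halt Entity for Continuous Travel" Valve (assumed):
' activate the continuous travel flow toward the tunnel face.
GX(2) = 1.0
Return Nothing

' Outgoing trace of the same Valve (assumed): deactivate the flow on arrival.
GX(2) = 0.0
Return Nothing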
This train entity then flows to subsequent modelling elements that trigger
the offloading of liners, excavation, and lining of the next 1 m tunnel section.
After all three of these tasks are completed in the discrete event model part,
the train entity is transferred into another Valve labelled Halt Entity for
Continuous Return, where it is halted until its continuous return to the
working shaft is completed. A similar design pattern is used to continuously
model the return of the train from the tunnel face. The attribute GX(3) is
used as the switch for activating or deactivating the Flow element labelled
Train Return Rate from TBM. The activation, i.e., setting GX(3) to a
value of one, and the de-activation, i.e., setting the value of GX(3) to zero,
are done within the formulas for the incoming and outgoing traces of the
Valve element labelled Halt Entity for Continuous Return. This activation
and de-activation of the return of the train from the tunnel face are possible
because of the following formula, which converts the return speed of 3.5 km/hr
(3,500 m/hr) to metres per minute and contains the GX(3) switch, embedded
within the Train Return Rate from TBM Flow element.
Return GX(3) * 3500.0 / 60.0

The simulation of the continuous return of the train from the TBM is
stopped when a state event that signifies the arrival of the train at the working
shaft is raised. This state event is raised when the Train Location Stock
value crosses a threshold from above within a tolerance of 0.05. A threshold
of 0.05 and a tolerance of 0.05 are conveniently chosen to model this state
event so that the depletion of the Train Location Stock value by the Train
Return Rate from TBM Flow is stopped before it drops below a value of
zero. Another new entity is created by the Watch modelling element labelled
Watch for Return from TBM. This entity flows into an Activator element
labelled Opens Return Valve, then into a Trace element, and finally into a
Destroy element. When the entity flows through the Opens Return Valve
Activator, it opens the Valve in the discrete event model part labelled Halt
Entity for Continuous Return, triggering the release of the train entity. This
in turn triggers the onset of other discrete events but marks the end of the
processes simulated continuously in any given train cycle.

6.6.5 Delay Models


In this simulation modelling example, it was assumed that two types of delays
could occur during the construction of a tunnel: delays that
were scheduled prior to construction and those that were not scheduled. Each
of these delay types is discussed next, along with details of how it was modelled.

Modelling Scheduled Delays


Scheduled delays, in this example, is a term used to refer to interruptions
to the main tunnel construction processes that are planned to take place
at a specific point in the course of the project. These types of delay are
inevitable because they support the progression of the main construction
processes. Examples of these for TBM tunnelling include track extension
and surveying. In this example, it was assumed that these take 240 minutes
and 480 minutes, respectively. The actual performance of these tasks was
modelled discretely using the Task elements shown in Figure 6.16.
On a tunnel construction site, track extension and surveying activities
cannot take place concurrently with any other activity; only one can proceed
at a time. It is not uncommon for the commencement of track extension or
surveying to be delayed to allow on-going tasks to finish. To fulfill these
requirements, the model layouts (see Figure 6.16) for track extension and
surveying were each set up such that they capture the train track resource
and the TBM resource before proceeding.
The GX(5) attribute is used to represent the length of the track in the
simulation. Initially, this attribute was set to the value of the Stock element

Figure 6.16: Model for Track Extension and Surveying

labelled Excavated Tunnel Length within the initialization property of the
scenario. The following formula was embedded within that property to set
the GX(5) attribute at the start of simulation.
Return GetStockValue("Excavated Tunnel Length")

As simulation progresses, this attribute is updated in increments of 6.0
m at the end of each track extension cycle. A formula was inserted within
the outgoing trace property of the ReleaseTrack2 element to achieve this (a
sketch follows below). A threshold of GX(5) + 6.0 was used within the Track Extension Watch
element to ensure that state events are raised when the Excavated Tunnel
Length stock value (i.e., the state variable) crosses this threshold from below
within a tolerance of 0.05. This setup had the effect of
scheduling a track extension approximately every 6.0 m. Each time,
the Track Extension Watch element creates a track extension entity in
response to these state events.
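A sketch of what the ReleaseTrack2 outgoing trace formula would look like, assuming it follows the same pattern as the other embedded formulas, is:
' Assumed form: advance the track-length attribute by one 6 m section.
GX(5) = GX(5) + 6.0
Return Nothing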
Once the train track has been captured and it is confirmed that the TBM
resource is no longer engaged in any other activity, the track extension entity
is routed into a Task element labelled ExtendTrack, where it is delayed
for a duration that emulates the extension process. Thereafter, the entity
flows through a Counter labelled Count Track Extensions, which increments
the number of track extension operations, and finally gets routed into a
Destroy element.
The surveying process had a setup similar to that described for track
extension. The GX(6) attribute was used in a fashion similar to GX(5) to
model the recurrence of the surveying task every 90 m; i.e., a threshold
value of GX(6) + 90 was used in the Watch labelled surveying, and GX(6)
was incremented by 90 within the outgoing trace formula of the element labelled
ReleaseTrack3 after each surveying cycle was completed. The cumulative
number of surveys done was tracked by the Counter element labelled Count
Surveys.

Modelling Unscheduled Delays


Unscheduled delays include those interruptions that are anticipated to occur
during construction but whose exact time and frequency of occurrence
are unknown due to their uncertain nature. This category of delays makes no
contribution to the main tunnel construction processes and should be eliminated
or minimized; doing so was one of the objectives in this modelling example.
Ten unscheduled delays were modelled, each with a unique sub-model. The
model layout in Figure 6.17 summarizes this. Eight of these delays had a
direct influence on TBM use while the other two (the 6th and
7th sub-models) affect the surveying crew and crane resource, respectively.
In each of these sub-models, an entity that represents a specific delay
is created at time zero. This entity is then routed into a Task modelling
element that schedules the arrival of the first delay. The exponential distributions
summarized in Table 6.4 were used to model the time until the first delay as well
as the time between subsequent delays. After the entity is released from this
Task element, it is routed into a Preempt modelling element. Certain types of
delays are set up to affect specific types of resources. For example, surveying
delays affect the surveying crew resource and weather/crane delays affect the
crane resource, while all the others affect the TBM resource. Depending on
the nature of the delay, the entity may or may not preempt the appropriate resource
instantaneously. For example, if delays such as cleaning the TBM
or miscellaneous delays are scheduled to take place while the TBM is busy,
these delays are held until the TBM becomes free. The priority of the
Preempt element in the sub-models for these delay types was set to a value
less than or equal to that used to request the TBM for other tasks. Other
delay types that relate to the TBM are set up to preempt the TBM resource
instantaneously. Survey delays and crane/weather delays were also set up to
instantaneously preempt the survey crew and crane resources, respectively,
but only when these resources are busy at the time of their occurrence. Priorities
higher than any others used for requesting or preempting these resources in
other parts of the model were used to achieve this behaviour.
There were two types of delays that do not affect the TBM resource:
"Survey delays" and "Crane delays". These were modelled
using a different strategy that involved preempting the appropriate resource

Figure 6.17: Model for Unanticipated Delays

only if it was busy at the time that the delay is scheduled to take place. A
"Survey delay" or "Crane delay" entity leaving the Task that models delay
interarrivals was routed into a ConditionalBranch element (see the 6th and
7th sub-models in Figure 6.17). This ConditionalBranch element had a VB
code snippet written within it to determine whether the appropriate resource
was idle or busy at that point in time. If the resource was found to be busy,
the delay entity would be routed out through the top branch, triggering the
preemption of the resource, a delay for a specified duration, and the subsequent
release of the resource before being routed back to the Task element that
schedules the next occurrence of that delay. Otherwise, if the resource was
idle, the delay entity would be routed out of the ConditionalBranch element
and back to the Task element that schedules the next occurrence of the
delay. The following code snippet was written within the ConditionalBranch
element of the "Survey delay" sub-model and demonstrates how the check
and entity routing were implemented for that type of delay.
Dim SurveyingResource As Simphony.General.Resource = _
    Scenario.GetElement(Of Simphony.General.Resource)("Surveying Crew")

If SurveyingResource.InUse > 0 Then
    Return True
Else
    Return False
End If
The other delay types affect the TBM and were implemented
in two ways. First, the TBM resource was captured by preemption,
thereby halting any other task that it might have been engaged in at the time
the delay occurred. Second, any tasks that use the TBM
resource and are modelled using a continuous approach were halted
by setting the appropriate flow rates to a value of zero. For example,
continuous excavation by the TBM was halted by preemption of the TBM
resource and the deactivation of the continuous simulation of that specific
task. Deactivation was achieved through the appropriate switches,
i.e., the global attributes summarized in Table 6.5. At the start of simulation,
all of these switches (i.e., GX(10) through GX(17)) are initialized to a value
of one within the initialization property of the scenario. When a delay that
affects a continuously simulated task such as TBM excavation is encountered,
the affected resource, the TBM in this case, is preempted. At the same time,
the appropriate switch is turned off, i.e., set to a value of zero, hence halting
the continuous excavation process. For example, when a delay due to the
TBM electrical system is encountered, the GX(13) attribute is set to zero
using the following formula.
GX(13) = 0.0
Return Nothing

This formula was embedded within the outgoing trace of the Preempt
modelling element labelled Preempt4. Setting the GX(13) switch to zero
de-activates the continuous TBM excavation process by setting the value of
the Flow element labelled TBM Excavation Rate to zero. The preempting
delay entity is routed into a Task element after being granted the resource
that it requires. This delay entity's LX(1) attribute is tagged with the time
that the delay commenced. The following formula was embedded within the
incoming trace property of this Task element to achieve this effect.
LX(1) = TimeNow
Return Nothing

This Task element emulates the time that a given delay persists. This
duration is modelled using an Exponential distribution (see Table 6.4). After
this time elapses, the preempting entity is routed into a Release element
where it releases the resource that it preempted. The following formula was
embedded within the outgoing trace of this Task element so that the duration
of the delay would be stored on the delay entity (i.e., on the LX(2) attribute)
for subsequent collection as a statistic.
LX(2) = TimeNow - LX(1)
Return Nothing

If the delay affects the continuous TBM excavation, a designated switch
for re-activating the continuous excavation is turned back on, i.e., set to
a value of one, at the time that the preempted resource is released. For the
TBM electrical system delay example, the following formula is
embedded in the outgoing trace property of the Release modelling element
to achieve this effect.
GX(13) = 1.0
Return Nothing

The delay entity is then routed into a Counter element where it registers
the number of events realized for that delay type. Thereafter, the delay
entity is routed into a Generate modelling element where it is cloned. The
original delay entity is routed into a StatisticCollect modelling element
labelled Collect Delay Statistics. The LX(2) attribute of this original delay
entity is used to collect a delay-duration observation; these observations are
sent to the Statistic element labelled Delay Duration Statistics. The following
formula was embedded within the value property of this StatisticCollect
modelling element.
Return LX(2)

The cloned delay entity is then routed back to the first Task element, which
models the delay interarrival times, and the cycle is repeated. This
cycle continues until the simulation is explicitly terminated.

6.6.6 Simulation Model Results


Statistical Results
After the simulation model was built and checked for integrity, it was run
30 times. The model was run multiple times because statistical distributions
were used as inputs. The simulation experiment was also seeded to facilitate
anticipated embellishments and scenario comparisons. The run count and
seed value are set on the scenario.
Simulation results for total project duration are read from the LastTime
property of the Counter modelling element labelled Terminate Simulation
in the model layout presented in Figure 6.13. The production rate values
for the tunnelling process are read from the ProductionRate property of the
Counter modelling element labelled Track Production Rate in the model
layout presented in Figure 6.13.
Two scenarios were run: one without delays and another with delays.
Delays were turned on or off in the simulation model by setting the Quantity
property of the Create modelling elements in the delay model shown in
Figure 6.17 to a value of one or zero. Thereafter, a sensitivity analysis was
performed on the model scenario that included delays. Simulation results,
i.e., tunnelling production rate and overall project duration, for the two
scenarios are summarized in Tables 6.6 and 6.7. The time unit used in the
simulation model was minutes, so the necessary conversions were performed to
obtain the values presented in these tables.
Figure 6.18 presents a Cumulative Distribution Function chart for the
total project duration for the Tunnel construction with delays included. This

Table 6.6: SA1A Results (Without Delays)

Parameter Mean Standard Deviation


Production Rate 0.60 m/hr 0.00 m/hr
Total Duration 1553.90 hrs 0.00 hrs

Table 6.7: SA1A Results (With Delays)

Parameter Mean Standard Deviation


Production Rate 0.60 m/hr 0.00 m/hr
Total Duration 1697.03 hrs 133.34 hrs

[Figure: cumulative probability (%) versus simulation time (minutes, ×10^5)]

Figure 6.18: CDF for SA1A Project Duration



chart was retrieved from the last count property of the counter modelling
element labelled Terminate Simulation shown in Figure 6.13. There was
no chart to present for the total project duration from the scenario without
delays because this parameter had a standard deviation of zero.
The time that it takes for delays to persist is collected as a statistic in
the simulation model. Train cycle times are another statistic collected in the
model. At the end of simulation, results of these statistics are retrieved from
their respective Statistic modelling elements and presented in Figures 6.19
and 6.20, respectively.
In order to confirm the accuracy with which delays are modelled, a comparison
is made between the numbers of delays that were realized on the construction
site and those obtained in the simulation model. Details of these
are summarized in Table 6.8. Simulated values are obtained from the mean
of the LastCount property of the appropriate Counter modelling element
that tracks the number of delays.
It is evident from Table 6.8 that the majority of delays are modelled
accurately, with the exception of TBM delays, TBM hydraulic system delays, and
delays due to surveying. These were poorly estimated and thus did not
occur with the expected frequency based on actual construction data. The
most likely reason for this relates to the small sample size used to formulate
the representative distributions for modelling delay interarrival times and
durations.

Table 6.8: Comparison of Construction and Simulation Delay Counts


Delay Actual Simulation Evaluation
TBM 17 13 Needs Optimization
TBM Hydraulic Systems 21 12 Needs Optimization
TBM Electrical Systems 8 8 Acceptable
TBM Water System 2 4 Acceptable
Cleaning TBM 6 5 Acceptable
Surveying 6 1 Needs Optimization
Weather/Crane 1 1 Acceptable
Rocks 13 11 Acceptable
Voids and PVC As-Built 13 14 Acceptable
Miscellaneous Delays 4 3 Acceptable

[Figure: histogram of train cycle time (minutes, ×10^4) versus relative frequency (%)]

Figure 6.19: Train Cycle Time Histogram

[Figure: histogram of delay persistence duration (minutes) versus relative frequency (%)]

Figure 6.20: Unanticipated Delay Duration Histogram



Sensitivity Analysis
A sensitivity analysis was carried out to assess the effects that unanticipated
delays have on the total tunnelling project duration. To achieve this, each
unanticipated delay was removed from the simulation model in turn. The delay
was removed by setting the Quantity property of the element that creates
the corresponding delay entity to zero. In order to quantify the impact of
delays on cost, it was assumed that each day comprises a 12-hour shift and that
$20,000 is spent each day on tunnel construction. Results of this analysis are
summarized in Table 6.9. The maximum possible total project duration of
1872.40 hours (156.03 days), obtained from the simulated scenario in which
all delays were considered, is used as the basis for the computation of the values
presented in Table 6.9.
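For example, removing the TBM delays reduces the maximum duration to 145.71 days, so the first row of Table 6.9 follows directly: 156.03 − 145.71 = 10.32 days, and 10.32 days × $20,000/day = $206,400.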

Table 6.9: Cost and Schedule Sensitivity Results


Delay Type                Maximum Duration    Duration Differential    Cost Savings
TBM 145.71 days 10.32 days $206,400
TBM Hydraulic Systems 150.53 days 5.50 days $110,000
TBM Electrical Systems 148.70 days 7.33 days $146,600
TBM Water Systems 151.75 days 4.28 days $85,600
Cleaning TBM 150.40 days 5.63 days $112,600
Surveying 157.34 days 0.00 days $0
Weather/Crane 158.13 days 0.00 days $0
Rocks 150.18 days 5.85 days $117,000
Miscellaneous Delays 152.03 days 4.00 days $80,000
Voids and PVC As-Built 145.56 days 10.47 days $209,400

6.6.7 Embellishments to the Base Delay Model


In an attempt to optimize the delays that are seen to generate poor results
in Table 6.8, it is evident that one delay type, i.e., weather/crane delays, re-
quires an increment to its mean inter-arrival time. The other two delay types,
i.e., TBM and TBM hydraulic, require a reverse treatment. A 25% margin is
used in the reductions while 30% is used for increments. These modications
were applied to only the TBM delays and delays due to weather/crane. No
adjustments were applied to TBM hydraulic delays. This entails the rst
model embellishment.

The second embellishment involves investigating the impact of mitigating
the delay types that indicated the highest cost savings to the tunnel construction
process in Table 6.9, i.e., TBM hydraulic, other and miscellaneous
delays, and delays due to rocks. Mitigation measures involve using margins
of 10%, 25%, and 50% to reduce the time that delays persist and to increase
their inter-arrival times. The results obtained are summarized in Table 6.10.
The process improvement strategies reveal a reduction of 1.18 days and a
cost saving of $23,600 for a 10% reduction in the severity of the three major
delay types (TBM hydraulic, other and miscellaneous delays, and delays due
to rocks). Reducing the severity of these major delays by half results in a
reduction in project duration of 6.20 days and a cost saving of $124,000.

Table 6.10: Summary of the Impacts of Unanticipated Delay Mitigation


Avoidance Amount          Maximum Duration    Duration Differential    Cost Savings
Delays reduced 10% 152.45 days 1.18 days $23,600
Delays reduced 25% 151.43 days 2.21 days $44,200
Delays reduced 50% 147.43 days 6.20 days $124,000

6.7 Modelling Strategies


6.7.1 Continuous Activities
It is often the case in a combined discrete event/continuous model that en-
tities in the discrete event portion of the model need to wait for activities
in the continuous portion to complete. For example, the loading of a truck
might be modelled continuously while the truck's other activities (hauling,
dumping, returning) might be modelled discretely. In this case, when the
truck is being loaded, the entity modelling the truck must be delayed until
the loading process has been completed.
The modelling pattern commonly used to solve this problem is shown in
Figure 6.21. In this example, an entity (representing a truck) arrives at the
Execute element labelled Start Loading. The Execute element is configured
to run the following code:
SetStockValue("Truck", 0)
SetFlowRate("Flow", 1.2)
Return True

which simply resets the value of the Stock element labelled Truck to 0
and then sets the rate of flow into that Stock element to 1.2. The entity
then proceeds to a Valve element labelled Block Truck. This Valve has its
InitialState property set to Closed and its AutoClose property set to 1, so
when the entity arrives it is forced to wait.
Now, because of the call made to the SetFlowRate method, flow begins into
the Truck Stock element. While the entity is blocked by the Valve, the Stock
element fills and is monitored by a Watch element. This element is
configured to trigger the creation of an entity when the truck is filled to capacity;
i.e., its Direction property is set to Positive, its Threshold property is set to
slightly less than the capacity of the truck, and its Tolerance property is set
to the difference between the truck's capacity and the Threshold property.
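Presumably, the entity created by the Watch then opens the Block Truck Valve through an Activator and shuts off the inflow; a minimal sketch of an Execute formula that would stop the flow, under that assumption, is:
SetFlowRate("Flow", 0)   ' assumed step: stop loading once the truck is full
Return True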

Figure 6.21: Continuous Activity Modelling Pattern


Chapter 7
Statistical Aspects of Simulation

The real world is not static or deterministic. Many events are unpredictable,
and many processes appear to occur in a random manner. For example,
the cycle times of the trucks or shovels in an earthmoving operation vary
from cycle to cycle. They are rarely the same. Likewise, the service time
for loading a truck varies from load to load. In the real world there are
many variables that dictate the outcomes of such operations, which make
them appear to be random. For example, the truck cycle time may vary
because of the operator, the road conditions, other trac, change in weather,
unexpected mechanical problems, and so forth. While it would be great to
have the simulation model include as many factors that impact the cycle time
as possible, and as much detail of the process as possible in order to provide
more accurate estimates of each service time, or random event, in general,
it would be ill-advised to try to capture all these variables and include them
in the model. First, it may not be possible to collect the required input for
such variables to feed into the model, and second, the model would be very
large, expensive to build and dicult to manage.
The variability in the real world can be accounted for in a simulation
model, however. We use the concept of Monte Carlo Simulation to achieve
this. The Monte Carlo Method is a process that makes use of random num-
bers and the principles of statistical sampling to model random processes.


7.1 Background to the Monte Carlo Method


Monte Carlo methods have their roots in the 19th century with the use
of probability and statistics to model games of chance (see Laplace (1812),
for example). Modern Monte Carlo algorithms were popularized at the Los
Alamos laboratories in the 1950s, where they were used in simulations of
neutronics, hydrodynamics, thermonuclear detonations, and so forth. The works were
those of Fermi, Ulam, von Neumann, Metropolis, and others. During that
period digital computers were not widely available, so simulations were done with
massive banks of hand calculators; those same calculators gave rise to early digital
computers. The Monte Carlo method is attributed to Ulam, who was mainly interested in
using sampling theory to solve some of the complicated problems presented
in the lab. The name given to the method, "Monte Carlo", is a reference to
the games at the Monte Carlo casinos that Ulam's uncle used to visit.
Monte Carlo methods are useful for solving difficult problems through
statistical sampling (a numerical approach). For example, suppose we want to
calculate the integral
\[
\int_a^b f(x)\,dx.
\]

We can estimate this by taking the average value of f over [a, b] and multiplying it
by b − a. To use Monte Carlo simulation to estimate this integral,
we simply take a random value x uniformly distributed on [a, b], calculate f(x), and repeat many times. Taking
the average of those calculations and multiplying by b − a gives an estimate
of this integral without having to integrate!
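In symbols, with \(x_1, \ldots, x_n\) drawn independently from a uniform distribution on \([a, b]\), the estimator just described is
\[
\int_a^b f(x)\,dx \;\approx\; (b-a)\,\frac{1}{n}\sum_{i=1}^{n} f(x_i).
\]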
Let's illustrate through a simple example. Suppose that
\[
f(x) = \sqrt{x+1},
\]
and that we wish to compute the definite integral
\[
\int_{20}^{32} f(x)\,dx.
\]
It is relatively straightforward to calculate this using the Fundamental Theorem
of Calculus as follows:
\[
\int_{20}^{32} f(x)\,dx = \int_{20}^{32} \sqrt{x+1}\,dx
= \left[\frac{2}{3}\sqrt{(x+1)^3}\right]_{20}^{32}
\approx 126.380 - 64.156
= 62.224.
\]

Now let's try to solve the integral using Monte Carlo simulation. We
construct the Simphony model shown in Figure 7.1. The first thing to note
about this model is that the RunCount property of the scenario has been set
to 10,000 (see Figure 7.2). This means that rather than executing the model
just once, Simphony will execute it 10,000 times. The idea is to evaluate f
at a randomly selected point between 20 and 32 each time the model is run.

Figure 7.1: Model to Calculate a Denite Integral

Moving on to the model itself, we see that it consists of four elements.
The first is a non-intrinsic Statistic that will accumulate the values of f that
we calculate. The second is a Create element configured to create a single
entity at time zero. Once created, the entity is routed to the third element
(a StatisticCollect) that will evaluate f at a randomly selected point using
the following formula:
Dim X As Double = SampleUniform(20, 32)
Return System.Math.Sqrt(X + 1)

The first line in this formula generates a random deviate from a uniform
distribution with a low of 20 and a high of 32, which is then stored in a
local variable named X. The second line in the formula evaluates f at this
point and returns the result, which is then added as an observation to the
statistic. Upon leaving the StatisticCollect, the entity is routed to a final
Destroy element.

Figure 7.2: Scenario Properties

After the model is run, the mean value reported by the statistic is 5.187,
so the value of the definite integral should be approximately:

5.187 × (32 − 20) = 62.244,

which agrees closely with the value calculated using the Fundamental Theorem
of Calculus.
Now suppose that the function f is not so easy to integrate, say
\[
f(x) = \sqrt{\frac{x+1}{e^{x/20}}},
\]
and that we again wish to compute the definite integral \(\int_{20}^{32} f(x)\,dx\).

In this case, the Monte Carlo approach becomes more attractive and even
necessary. To modify our model to perform this calculation, we need only
change the formula of the StatisticCollect element to:
Dim X As Double = SampleUniform(20, 32)
Return System.Math.Sqrt((X + 1) / System.Math.Exp(X / 20))

This time, when the model is run, the mean value reported by the Statistic
is 2.701, so the value of the definite integral should be approximately:

2.701 × (32 − 20) = 32.412.

Obviously, Monte Carlo simulation can be a very powerful approach for
calculating definite integrals where it is very difficult (if not impossible) to
arrive at a solution using analytic methods.
Let us now formalize the Monte Carlo method. In its simplest form, the
approach we used in the example above can be described by the following
algorithm:

1. Model the problem at hand in a manner that lends itself to statistical


sampling theory:
7.2. MONTE CARLO SIMULATION IN CONSTRUCTION 239

(a) Formulate the problem using tractable analytical methods (e.g.,


formulas, algorithms, logic, etc.).
(b) Represent the formulation of the problem as a computer model
(e.g., using a spreadsheet, a Simphony model, or a computer pro-
gram).
(c) Replace each variable that is thought to be uncertain or stochastic
with a statistical distribution.
(d) The model is now composed of formulations where the basic vari-
ables are statistical distributions.
2. Conduct a statistical sampling experiment using your model by repeat-
ing the following process n times (where n is a large number):
(a) Generate random numbers that are uniformly distributed on the unit interval [0, 1] and that are reproducible (i.e., the simulationist should be able to replicate the same procedure when desired; we call these pseudo-random numbers since they are not truly random). Note that when using a computer simulation toolkit this is done automatically for you.
(b) For each variable that is represented with a statistical distribution
in the model, transform the random number into a value from that
statistical distribution. Again, this is automated in simulation
tools.
(c) Complete all calculations in the formulation (i.e., compute the
output variables desired of the model).
(d) Store the results for each output of this iteration.
(e) Check if n iterations have been completed: if yes then exit, oth-
erwise go to step (a).
3. Statistically analyze the sample of collected output values and develop
point and interval estimates for all relevant output variables.

7.2 Monte Carlo Simulation in Construction


In construction management, many applications of Monte Carlo simulation arise. This is because construction operations are subject to a wide variety of fluctuations and interruptions. Varying weather conditions, learning

development on repetitive operations, equipment breakdowns, management interference, and others are all external factors that impact the production processes. As a result of such factors, the behaviour of a construction process becomes subject to some random variations. It rarely happens on a construction site that the same task consumes the same duration on successive occurrences. A truck traveling from one place to another will take different amounts of time on each trip. This fact necessitates modelling a construction process as a random (stochastic) process that varies and behaves based upon some pre-specified laws of probability. We might, through analysis, discover that the travel time of our truck varies between 25 and 60 minutes and is beta distributed with shape parameters 3 and 4, as shown in Figure 7.3.

Figure 7.3: Truck Travel Time Modelled as a Beta Distribution (relative frequency vs. duration in minutes)

Arrival processes are another common source of randomness in construction operations. At a fabrication shop, for example, the time between orders being placed (the interarrival time) will not be constant, but random, as shown in Figure 7.4.
In fact any parameter in a General Purpose model can be converted to
a random variable, be it the number of resources available, the amount of
time required for service, the path that an entity follows, or the time ran-
dom events appear. The General Purpose model shown in Figure 7.5 below

Figure 7.4: Placement of Orders at a Fabrication Shop

demonstrates where typical parameters can be randomly modelled in Simphony.

Figure 7.5: Model of an Equipment Repair Shop

7.3 Range Estimating


Another use of Monte Carlo simulation in construction is range estimating,
in which distributions are used to replace the quantities and unit rates in a
cost estimate. In this simulation, we break down the estimate into line items
each comprised of a unit rate and a quantity. We replace the unit rate with a
statistical distribution that best models that unit rate and the quantity with
a statistical distribution that best models that quantity. We simply multiply
those two items for each line item and sum the total to get the estimate. For
example, assume that the cost per linear meter of excavation and installation
of a utility pipeline is thought to be between $2,500/m and $2,900/m with

the most likely cost being $2,600/m. We can assume that the unit cost is best modelled with a triangular distribution with end points 2,500 and 2,900 and a mode of 2,600. The total length of the pipeline is 1,200 meters, to within 5 meters. We can model this with a uniform distribution with end points of 1,195 and 1,205. During the simulation, we simply sample a value from the triangular distribution of unit cost and a value from the uniform distribution of length, and multiply the two random samples to estimate the cost of that line item for the given iteration. We then sum up all line item costs to get the total for that iteration. When we collect the observations from all iterations, we have a distribution of the total project estimate.
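As a rough sketch of this procedure (not a Simphony model), the pipeline line item above can be simulated with ordinary code. The VB.NET sketch below is illustrative only: the triangular sampling uses the inverse transform formula derived later in Section 7.5, the SampleTriangular function is a local stand-in for Simphony's built-in routine, and the 1,000 iterations and fixed seed are assumptions made for the demonstration.

Imports System.Linq

Module RangeEstimatingSketch
    Dim rng As New Random(1)

    ' Local stand-in for Simphony's triangular sampler (min a, max b, mode c),
    ' implemented with the inverse transform method of Section 7.5.2.
    Function SampleTriangular(a As Double, b As Double, c As Double) As Double
        Dim y As Double = rng.NextDouble()
        If y < (c - a) / (b - a) Then
            Return a + Math.Sqrt(y * (b - a) * (c - a))
        Else
            Return b - Math.Sqrt((1 - y) * (b - a) * (b - c))
        End If
    End Function

    Sub Main()
        Dim iterations As Integer = 1000            ' assumed number of Monte Carlo iterations
        Dim costs(iterations - 1) As Double

        For i As Integer = 0 To iterations - 1
            Dim unitCost As Double = SampleTriangular(2500, 2900, 2600) ' $/m, triangular(2500, 2900, mode 2600)
            Dim length As Double = 1195 + 10 * rng.NextDouble()         ' m, uniform(1195, 1205)
            costs(i) = unitCost * length                                ' line item cost for this iteration
        Next

        Console.WriteLine("Mean estimate: {0:C0}", costs.Average())
        Console.WriteLine("Min estimate : {0:C0}", costs.Min())
        Console.WriteLine("Max estimate : {0:C0}", costs.Max())
    End Sub
End Module

Collecting the array of line item costs over all iterations gives the distribution of the total estimate described above.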

7.3.1 Shaft Construction Example


To illustrate the Monte Carlo simulation of a cost estimate, consider the estimate for a shaft intended to support construction of a tunnel. The estimator is not certain of the values he/she is using for unit rates or production and thus cannot ascertain the exact numbers to use.
The work items for the shaft are shown in Table 7.1 below. For most
items, an optimistic cost, a most likely cost, and a pessimistic cost are in-
dicated. In our Monte Carlo simulation, we will model the cost of these
items with a triangular probability distribution. The costs of the remaining
items are known with certainty, and in these cases only a most likely cost is
provided. In our simulation, these costs are constant and do not need to be
modelled with a probability distribution.
A Simphony General Template model for this estimate is shown in Figure 7.6. In this model, a single entity is generated by a Create element, which then passes through eight CostCollect elements, each of which represents one of the cost items. The CostCollect elements are configured with a quantity of 1, and the unit cost is either the appropriate constant or a formula that samples from the appropriate triangular distribution. For example, the formula for the CostCollect element labelled "Mobilization" is:
Return CDec(SampleTriangular(40000, 100000, 70000))

The scenario is configured to run 100 times. The cost report produced after the model has been simulated is shown in Figure 7.7.

Table 7.1: Shaft Work Items

#   Description            Optimistic   Most Likely   Pessimistic
1   Mobilization           $40,000      $70,000       $100,000
2   Power installation     –            $89,000       –
3   Excavate shaft         $97,600      $122,000      $146,400
4   Excavate undercut      $200,000     $269,000      $350,000
5   Excavate tail tunnel   $100,000     $123,000      $150,000
6   Pour undercut          –            $80,000       –
7   Pour tail tunnel       –            $39,000       –
8   Pour shaft             $100,000     $120,000      $150,000

Figure 7.6: Range Estimating Model for Working Shaft Costs



Figure 7.7: Working Shaft Cost Report

7.3.2 Tunnel Construction Example


Monte Carlo methods can also be used to model the schedule of a project.
To illustrate, we'll look at the schedule for the tunnel project in the previous
example, except this time we'll consider the entire operation rather than just
the shaft. The work items for the project are shown in Table 7.2 below.
Note that excavation of the tunnel has been divided into two halves. This is
because the first half (866 m) of the tunnel will be excavated using a single
shift per day and the second half (756 m) will be excavated using two shifts
per day.
For most items in this table, an optimistic duration, a most likely dura-
tion, and a pessimistic duration are indicated. In our Monte Carlo simula-
tion, we will model the duration of these items with a triangular probability
distribution. For other items, only an optimistic duration and a pessimistic
duration are given. In these cases, the duration will be modelled with a uni-
form distribution. Finally, the last item (ordering of segments) has a known
duration of 28 days and will be constant in our model.
Unlike a cost estimate, when we model a schedule we have to take into
account any dependencies between the activities. The dependencies of the
work items in our project are illustrated in Figure 7.8 as a CPM network
(activities-on-arrows).

Table 7.2: Tunnel Work Items

#   Description            Optimistic   Most Likely   Pessimistic
1   Excavate shaft         15 days      18 days       30 days
2   Undercut/tail tunnel   15 days      30 days       60 days
3   Install TBM            5 days       –             10 days
4   Excavate 866 m         144 days     173 days      216 days
5   Excavate 756 m         84 days      95 days       108 days
6   Extract TBM            5 days       –             10 days
7   Line undercut           13 days      15 days       17 days
8   Excavate sump          25 days      –             30 days
9   Order segments         –            28 days       –

Figure 7.8: CPM Network Showing Task Dependencies

Figure 7.9: Range Estimating Model for Tunnel Schedule



A Simphony General Template model for this schedule is shown in Figure 7.9. In this model, a single entity is generated by a Create element, which then passes through Task elements representing the work items. The duration of each Task element is set to the distribution specified in Table 7.2. Dependencies between Tasks are modelled using paired Generate and Consolidate elements. The scenario is configured to run 1,000 times and its time unit is set to days. After execution, the scenario's termination statistic can be examined to see the overall duration of the project, as shown in Figure 7.10. As can be seen from this chart, there is a 90% probability that the project will finish at or before the 411 day mark.

Figure 7.10: Tunnel Project Duration (cumulative probability vs. project duration in days; the 90th percentile falls at 411 days)

7.4 Generating Random Numbers


Computer simulation of a stochastic process is based upon the generation of random numbers. Consider, for example, a truck hauling earth from one location to another. Suppose that on a particular day, we observed the truck performing this operation and saw that it takes the truck anywhere between 10 and 20 minutes to make the trip. In order to model this in a simulation, we have to have some means of generating a random duration for each trip between the specified limits in an efficient way. The basis

for sampling durations between the lower limit of 10 and upper limit of 20
lies in generating a uniform random number on the range [0, 1] and then
transforming that number into the appropriate range and/or model of the
collected data. The transformation of random numbers into an appropriate
variate will be covered in the next section. In this section we discuss the
generation of uniform random numbers on the range [0, 1].
A true random number as defined by mathematicians is very difficult to generate. In today's age of digital computers, simulators settle for a pseudo-random number. Such a number possesses attributes similar to those of a true random number for functional purposes, and from here on we will use the phrase random number to mean a pseudo-random number.
To get a random number on the range of [0, 1] one could throw dice, look
in a telephone book, draw numbered balls, use tables like the one produced
by the RAND corporation, or use numerical means. Numerical means are
adaptable for computer use and with some care they can be used to generate
numbers which appear to be random for all practical purposes. A recursive
algorithm that is used to generate random numbers is referred to as a random
number generator (RNG). An RNG should produce fairly uniform numbers
on the range [0, 1] that appear to be independently sampled, dense enough
on the interval [0, 1], and reproducible. In addition, the algorithm should be efficient and portable for use in simulation programs.
Numerical techniques for random number generation go back to the early 1940s, with the mid-square method introduced by von Neumann and Metropolis. Lehmer (1951) introduced a method referred to as the Linear Congruential Scheme (LCS). Today the most widely used version of the LCS is the Multiplicative LCS, which can be defined by the recursive equation:

Zn = a × Zn−1 mod m,

where Z_0 is a user-defined starting integer value (referred to as a seed), m is the modulus, usually defined to be a large integer value (2^31 − 1, for example), and a is a multiplier, usually set to the value 7^5 = 16,807 (Lewis and Miller, 1969). The random number R_n on the nth iteration is then:

R_n = \frac{Z_n}{m}.
To generate random numbers one usually specifies a seed number Z_0 as a starting value. The value of Z_1 is then computed, resulting in R_1, and

so on. It is important to note, however, that the values of a and m must


be chosen with the utmost care, otherwise the random numbers will start to
regenerate after a certain number of iterations.
To illustrate the Multiplicative LCS, consider the following example (note
that for illustration purposes we use small values of a and m; the values
previously mentioned would be more applicable for a computer program):

a = 5 m = 7 Z0 = 9.

In this case, the recursive equation is:

Zn = 5 × Zn−1 mod 7.

So that:

Z1 = 5 × Z0 mod 7 = 5 × 9 mod 7 = 45 mod 7 = 3,

and the first generated random number is 3 ÷ 7 ≈ 0.4285714. Table 7.3 below
shows the results obtained by continuing the calculations.

Table 7.3: Multiplicative LCS

n Zn Rn
0 9 
1 3 0.4285714
2 1 0.1428571
3 5 0.7142857
4 4 0.5714286
5 6 0.8571429
6 2 0.2857143
7 3 0.4285714

Note that on the 7th iteration we obtained the same value of Z_n as on the 1st iteration. This is a result of using a modulus equal to 7, and is why it is recommended to use a large number like 2^31 − 1 for m. Also note that the numbers generated are fairly uniform on the range [0, 1] and appear independent, and if we had used a larger value of m, we would have obtained a denser population (in this example we were able to create only 6 random numbers before starting to regenerate).
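The recursion above is easy to code directly. The short VB.NET sketch below is illustrative only; it reproduces Table 7.3 with the toy constants a = 5, m = 7, and Z_0 = 9. Substituting m = 2^31 − 1 and a = 16,807 would give the widely used minimal standard generator, provided care is taken to avoid integer overflow.

Module MultiplicativeLcg
    Sub Main()
        ' Toy constants from the worked example; a production generator would
        ' use m = 2^31 - 1 and a = 16807.
        Dim a As Long = 5
        Dim m As Long = 7
        Dim z As Long = 9        ' the seed Z0

        Console.WriteLine("n   Zn   Rn")
        For n As Integer = 1 To 7
            z = (a * z) Mod m                     ' Zn = a * Zn-1 mod m
            Dim r As Double = z / CDbl(m)         ' Rn = Zn / m
            Console.WriteLine("{0}   {1}   {2:F7}", n, z, r)
        Next
    End Sub
End Module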

7.5 Generating Random Deviates


In the previous section we introduced the basics of generating uniform ran-
dom numbers on the unit interval [0, 1]. We also mentioned that most random
deviates are basically transformations of these generated random numbers.
There are a number of methods by which one can transform these num-
bers into some desired number with a particular distribution. This section
introduces some of the more common techniques.

7.5.1 Definitions
The cumulative distribution function (CDF) of a random variable X is defined by:

F_X(x) = \Pr\{X \le x\}, \quad x \in \mathbb{R}.

The function F_X completely defines the underlying probability measure governing the behaviour of X.
A probability density function (PDF) of the random variable X is any non-negative function f_X such that:

F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt, \quad x \in \mathbb{R}.

The inverse of the cumulative distribution function, F_X^{-1} (called the quantile function), is given by:

F_X^{-1}(y) = \min\{x \in \mathbb{R} : F_X(x) \ge y\}, \quad y \in [0, 1].

Sometimes, if the distribution of X is a well-known distribution with


parameters θ1 , . . . , θq , we will write:

FX (x; θ1 , . . . , θq ) and fX (x; θ1 , . . . , θq ),

instead of FX (x) and fX (x), respectively, when we wish to be explicit about


the parameters being used.

7.5.2 The Inverse Transform Method


The most basic and most reliable method for generating random deviates
is the inverse transform method. This method is superior to other methods
in that there is a one-to-one correspondence between the random number
used and the deviate generated. This becomes crucial when debugging a
simulation model or trying to replicate the same experiment for another
purpose. This method is normally preferred to others when the quantile
function is known and is easy to numerically compute. It works as follows:

1. Generate a random number y on the unit interval [0, 1] (as discussed


in the previous section).

2. Set x = FX−1 (y).

3. Deliver x.

Graphically, the method works by generating a random number y on the


y -axis, tracing across to the cumulative distribution function, and then down
to the x-axis to obtain the value x. The process is illustrated in Figure 7.11.

Figure 7.11: The Inverse Transform Method



Generating Uniform Deviates


The probability density function of a random variable X that has a uniform density over the interval [a, b] is given by:

f_X(x) = \begin{cases} \frac{1}{b-a} & \text{if } x \in [a, b], \\ 0 & \text{otherwise.} \end{cases}
The corresponding cumulative distribution function can be explicitly determined by integrating the probability density function:

F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt = \begin{cases} 0 & \text{if } x < a, \\ \frac{x-a}{b-a} & \text{if } x \in [a, b], \\ 1 & \text{if } x > b. \end{cases}

To use the inverse transform method, set F_X(x) = y and solve for x as follows:

\frac{x-a}{b-a} = y \implies x = y(b-a) + a.
Thus, to generate a uniform random deviate on the interval [a, b] we can
use the following process:

1. Generate a random number y on the unit interval [0, 1].

2. Set x = y(b − a) + a.

3. Deliver x.

Generating Triangular Deviates


The probability density function of a random variable X that is triangularly distributed with minimum a, maximum b, and mode c is given by:

f_X(x) = \begin{cases} \frac{2(x-a)}{(b-a)(c-a)} & \text{if } x \in [a, c), \\ \frac{2(b-x)}{(b-a)(b-c)} & \text{if } x \in [c, b], \\ 0 & \text{otherwise.} \end{cases}

The corresponding cumulative distribution function can be explicitly determined by integrating the probability density function:

F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt = \begin{cases} 0 & \text{if } x < a, \\ \frac{(x-a)^2}{(b-a)(c-a)} & \text{if } x \in [a, c), \\ 1 - \frac{(b-x)^2}{(b-a)(b-c)} & \text{if } x \in [c, b], \\ 1 & \text{if } x > b. \end{cases}

To use the inverse transform method, set F_X(x) = y and solve for x on the intervals [a, c) and [c, b]:

x = \begin{cases} a + \sqrt{y(b-a)(c-a)} & \text{if } 0 \le y < (c-a)/(b-a), \\ b - \sqrt{(1-y)(b-a)(b-c)} & \text{if } (c-a)/(b-a) \le y \le 1. \end{cases}
Thus, to generate a triangular random deviate we can use the following process:

1. Generate a random number y on the unit interval [0, 1].

2. If y < (c − a)/(b − a), set x = a + \sqrt{y(b-a)(c-a)}; otherwise, set x = b − \sqrt{(1-y)(b-a)(b-c)}.

3. Deliver x.

Generating Exponential Deviates


The probability density function of a random variable X that is exponentially distributed with a mean of µ > 0 is given by:

f_X(x) = \begin{cases} \frac{1}{\mu} e^{-x/\mu} & \text{if } x \ge 0, \\ 0 & \text{otherwise.} \end{cases}
The corresponding cumulative distribution function can be explicitly determined by integrating the probability density function:

F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt = \begin{cases} 1 - e^{-x/\mu} & \text{if } x \ge 0, \\ 0 & \text{otherwise.} \end{cases}

To use the inverse transform method, set FX (x) = y and solve for x as
follows:
x = −µ ln(1 − y).
Now if y is a uniform random number on the interval [0, 1], then 1 − y
must be also. Thus when generating random deviates, we can replace 1 − y
in the above equation with y and generate an exponential random deviate
using the following process:

1. Generate a random number y on the unit interval [0, 1].

2. Set x = −µ ln(y).

3. Deliver x.
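The three inverse transform recipes above (uniform, triangular, and exponential) translate directly into code. The VB.NET sketch below is illustrative only; Simphony provides equivalent built-in sampling functions, so these local implementations exist purely to show the mechanics, and the seed and example parameters are assumptions.

Module InverseTransformSampling
    ' One shared stream of uniform [0, 1) random numbers.
    Dim rng As New Random(2016)

    Function SampleUniform(a As Double, b As Double) As Double
        Return a + rng.NextDouble() * (b - a)          ' x = y(b - a) + a
    End Function

    Function SampleTriangular(a As Double, b As Double, c As Double) As Double
        Dim y As Double = rng.NextDouble()
        If y < (c - a) / (b - a) Then
            Return a + Math.Sqrt(y * (b - a) * (c - a))
        Else
            Return b - Math.Sqrt((1 - y) * (b - a) * (b - c))
        End If
    End Function

    Function SampleExponential(mu As Double) As Double
        ' Uses 1 - y rather than y so the argument of Log stays strictly positive.
        Return -mu * Math.Log(1.0 - rng.NextDouble())
    End Function

    Sub Main()
        ' A few deviates from each distribution, e.g. a truck trip of 10 to 20 minutes.
        For i As Integer = 1 To 3
            Console.WriteLine("Uniform(10, 20)       : {0:F3}", SampleUniform(10, 20))
            Console.WriteLine("Triangular(10, 20, 12): {0:F3}", SampleTriangular(10, 20, 12))
            Console.WriteLine("Exponential(mean 15)  : {0:F3}", SampleExponential(15))
        Next
    End Sub
End Module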

7.5.3 The Acceptance/Rejection Method


In some cases, the PDF of a distribution might exist, but the correspond-
ing CDF cannot be expressed analytically. Examples include the well-known
beta distribution, normal distribution, and others. In this case, the inverse
transform method cannot be used, so another technique, such as the accep-
tance/rejection method, the composition method, or an analytical approxi-
mation of the inverse CDF, must be used. Herein we introduce the accep-
tance/rejection method (as described by Bratley, Fox, and Schrage (1983)
after von Neumann (1951)) for its simplicity and wide use in simulation
packages.
Given a random variable X with probability density function f_X defined on the interval [a, b], let:

c = \max\{f_X(x) : x \in [a, b]\},

so that we have:

0 \le f_X(x) \le c, \quad x \in [a, b].
The acceptance/rejection method works as follows:

1. Generate a uniform random variate x on the interval [a, b].

2. Generate a uniform random variate y on the interval [0, c].

3. If y ≤ fX (x), then deliver x; otherwise return to step 1.



This is a trial and error method (see Figure 7.12 for a graphical representation). Basically, we are generating a random point on the xy-plane where the PDF is plotted. If the point falls on or below the PDF curve, its x-coordinate is accepted as a deviate from the distribution f_X; if not, we try again by generating another point.

In Figure 7.12, the first point generated is (x1, y1). Since y1 ≤ f_X(x1), this point is accepted and x1 is delivered. The next time a random deviate is called for, the point generated is (x2, y2). Since y2 > f_X(x2), this point is rejected and a new point (x3, y3) is generated, but once again y3 > f_X(x3), so this point is also rejected and a new point (x4, y4) is generated. This time y4 ≤ f_X(x4), so the point is accepted and x4 is delivered.

Figure 7.12: The Acceptance/Rejection Method

Generating Beta Deviates


As an application of the acceptance/rejection method, we demonstrate how to generate random deviates from a random variable X that is beta distributed with shape parameters α = 2 and β = 5 and a range of 0 to 1. Our first step is to evaluate the beta function B(α, β), which is straightforward since both α and β have integral values, allowing us to express the gamma function (denoted by Γ) as a factorial:

B(\alpha, \beta) = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)} = \frac{(\alpha-1)!\,(\beta-1)!}{(\alpha+\beta-1)!} = \frac{1!\,4!}{6!} = \frac{1}{30}.

The probability density function of X is then:

f_X(x) = \frac{1}{B(\alpha,\beta)}\, x^{\alpha-1}(1-x)^{\beta-1} = 30x(1-x)^4.

Next, we can use elementary calculus to determine that on the interval [0, 1], f_X is maximal when x = 0.2. So:

c = f_X(0.2) = 30 \times 0.2 \times (1-0.2)^4 = 2.4576.
We can now generate random deviates from X using the following procedure:

1. Generate a uniform random variate x on the interval [0, 1].

2. Generate a uniform random variate y on the interval [0, 2.4576].

3. If y ≤ 30x(1 − x)^4, then deliver x; otherwise return to step 1.

Note that this procedure is not very efficient: the region of the xy-plane in which we are randomly generating points has an area of 2.4576, while the area under the curve f_X is 1. Thus, only about 40% of the points we generate will be accepted. See Cheng (1978) for a more effective way of generating beta deviates using the acceptance/rejection method.
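A direct implementation of this procedure is sketched below in VB.NET (illustrative only). It hard-codes the Beta(2, 5) density 30x(1 − x)^4 and the bound c = 2.4576 derived above, and reports the observed acceptance rate, which should be roughly 1/2.4576 ≈ 40%; the seed and sample size are assumptions for the demonstration.

Module AcceptanceRejectionBeta
    Dim rng As New Random(7)

    ' Beta(2, 5) probability density function on [0, 1].
    Function BetaPdf(x As Double) As Double
        Return 30 * x * Math.Pow(1 - x, 4)
    End Function

    Sub Main()
        Dim c As Double = 2.4576      ' maximum of the PDF, attained at x = 0.2
        Dim wanted As Integer = 10000
        Dim accepted As Integer = 0
        Dim trials As Integer = 0
        Dim sum As Double = 0

        While accepted < wanted
            trials += 1
            Dim x As Double = rng.NextDouble()       ' uniform variate on [0, 1]
            Dim y As Double = c * rng.NextDouble()   ' uniform variate on [0, c]
            If y <= BetaPdf(x) Then                  ' accept if the point falls under the PDF
                accepted += 1
                sum += x
            End If
        End While

        Console.WriteLine("Acceptance rate: {0:P1}", accepted / CDbl(trials))
        Console.WriteLine("Sample mean    : {0:F4} (theoretical 2/7 = 0.2857)", sum / accepted)
    End Sub
End Module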

7.5.4 The Box-Muller Method


The last technique we present deals exclusively with generating normal deviates. Since the CDF of the normal distribution cannot be expressed analytically, one has to resort to a technique other than the inverse transform. The one we introduce here (Box & Muller, 1958) does not fit under any of the acceptance/rejection, composition, or approximation-of-CDF methods; however, it is simple, widely used, and accurate. It can be summarized as follows:

To generate deviates from a normal distribution with a mean of µ and a variance of σ², first generate two uniform random variates x1 and x2 on the interval [0, 1], and then set:

z_1 = \mu + \sigma \cos(2\pi x_1)\sqrt{-2\ln(x_2)},

and

z_2 = \mu + \sigma \sin(2\pi x_1)\sqrt{-2\ln(x_2)}.

The values z1 and z2 will be two independent normal deviates from the specified distribution. In the context of programming, one would call the routine and generate two normal deviates at a time. The first would be returned from the first call, and the second saved for the second call.
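A minimal sketch of such a routine is shown below in VB.NET (illustrative only). It generates the deviates in pairs and caches the second one for the next call, as described above; the seed and the example parameters (mean 20, standard deviation 4) are assumptions for the demonstration.

Module BoxMullerSampling
    Dim rng As New Random(99)
    Dim haveSpare As Boolean = False
    Dim spare As Double = 0      ' cached second deviate, in standard normal form

    Function SampleNormal(mu As Double, sigma As Double) As Double
        If haveSpare Then
            haveSpare = False
            Return mu + sigma * spare
        End If
        Dim x1 As Double = rng.NextDouble()
        Dim x2 As Double = 1.0 - rng.NextDouble()    ' keeps the argument of Log strictly positive
        Dim r As Double = Math.Sqrt(-2.0 * Math.Log(x2))
        spare = Math.Sin(2.0 * Math.PI * x1) * r     ' z2, saved for the next call
        haveSpare = True
        Return mu + sigma * Math.Cos(2.0 * Math.PI * x1) * r   ' z1
    End Function

    Sub Main()
        ' Five deviates from a normal distribution with mean 20 and variance 16 (sigma = 4).
        For i As Integer = 1 To 5
            Console.WriteLine("{0:F4}", SampleNormal(20, 4))
        Next
    End Sub
End Module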

7.5.5 Other Techniques


There are other techniques for generating random deviates, as previously mentioned, though most are beyond the scope of this text. We refer the interested reader to Fishman (1977), Bratley et al. (1983), Law and Kelton (1991), and other books. Basically, each simulation package might use any of a number of techniques; the main factor to look for is the ability to replicate the same generated random streams, for ease of debugging and other reasons. This is best accomplished via the inverse transform method or by dedicating a separate stream to each sampling distribution, such as the method implemented in MicroCYCLONE.

7.6 Input Modelling for Simulation Studies


We have seen in the previous section how the computer generates random
deviates for use in a computer simulation of a given construction model. In
order for the procedures described above to work successfully, the simulation-
ist must specify for the simulation program two basic things regarding gen-
eration of random deviates: first, the type of distribution desired for random
deviate generation; and second, the parameters of the selected distribution.

7.6.1 Empirical Distributions


If data sampled from a random process is available, then one of the easiest ways to generate random deviates modelling that process is to use an empirical distribution. Suppose, for example, that x1, . . . , xn are observations sampled from our random process. We define the cumulative distribution function of our distribution to be:

\hat{F}_n(x) = \frac{\text{number of } x_i \le x}{n}.

Note that we may assume (without loss of generality) that our observations have been sorted, i.e., that:

x_1 \le x_2 \le \cdots \le x_{n-1} \le x_n,

in which case:

\hat{F}_n(x_i) = \frac{i}{n}, \quad i = 1, \ldots, n.

Figure 7.13 illustrates the cumulative distribution function of a typical em-


pirical distribution.

Figure 7.13: An Empirical Distribution

To use an empirical distribution, the simulationist must write the necessary code to sample from the empirical CDF; thus, the technique is inconvenient, particularly when trying to manipulate large sets of data and multiple data sets at the same time.

A classical way of overcoming the disadvantage of this technique is by searching for a standard statistical distribution that has a well-defined functional form (see, for example, some of the distributions given in the previous section) and that closely models the set of observations available. Such distributions are usually supported by simulation software. The problem is reduced to finding a distribution, determining the parameters that fully define the selected distribution, and testing how well the selected distribution tracks the empirical distribution of the sample data.

7.6.2 Selecting a Distribution


Given that we have collected a set of observations from a construction site,
the first decision to be made would be which of the statistical distributions
to use as a model (e.g., normal, beta, lognormal, etc.). In most cases, it
is a matter of the physical properties of the random number desired and

the experience of the simulationist. Modelling durations of work tasks, for


example, requires a bounded distribution since physically, any real task will
require some amount of time to be accomplished; the time cannot be zero or
negative and is usually bounded from the positive side as well depending on
the type of the task. This automatically excludes unbounded distributions
such as normal and exponential from being good candidate models (this does
not include the truncated versions of these distributions). If, for example,
we were to model the duration of a construction activity with a normal
distribution, then there would always be a small possibility that the duration
sampled will be negative (physically impossible for time) or extremely large
(highly unlikely for a construction activity).
The most basic way of selecting a statistical distribution as a model for a set of data is to relate the sample obtained to the shape (or shapes, for families of distributions) of the theoretical distribution. A histogram formed from the sample is analogous to the PDF of the theoretical distribution, since both reflect the weight each of the sample intervals (or sample points) should receive in terms of their probability of occurrence (hence sampling). The idea would be to relate the shape of the histogram of the sample to the shape of a known distribution. For example, if the histogram had its mass concentrated to the left with a long right tail, a lognormal or beta distribution would be a good candidate since both can attain such shapes. The problem with this technique, however, is the construction of the histogram itself. In the absence of a standard technique to do this, one can easily distort the real shape of the histogram by inappropriately specifying the width of the cells and their locations.
A good way to overcome this problem is by using a standard procedure for specifying histograms, like Sturges' rule (Sturges, 1926), which can be summarized as follows: given n observations x1, . . . , xn to be summarized in a histogram, one would take:

\text{Number of cells} = \lceil 1 + 3.3 \log_{10}(n) \rceil,
\text{Width of a cell} = \frac{\max\{x_i\} - \min\{x_i\}}{\text{Number of cells}},
\text{Low value of first cell} = \min\{x_i\}.

This guideline will usually reveal the general layout of the data provided the
number of cells is in the range of 5 to 15. The most frequently encountered
problem with constructing a histogram is the tendency to specify more cells

than the data can support. Sturges' rule accounts for this in a heuristic
manner.
To illustrate the construction of a histogram for a sample data set, we present the data shown below of the time (in minutes) it takes to dump concrete on a floor during a concrete pouring operation:

0.367 0.422 0.379 0.615 0.206 0.659


0.413 0.912 0.479 0.769 0.161 0.359
0.844 0.824 0.382 0.906 0.412 0.461
0.453 0.479 0.691 0.282 0.159 0.688
0.362 0.499 0.326 0.551 1.300 0.953
0.162 0.450 0.852 0.577 0.207 0.495
0.528 1.282 0.589 0.315 0.466 0.309
0.982 0.393 0.420 0.404 0.716

To construct a histogram according to Sturges' rule, we need to know the total number of observations (47 in this case) and the end points of the data (0.159 and 1.300 in this case). Our calculations are as follows:

\text{Number of cells} = \lceil 1 + 3.3 \log_{10}(47) \rceil = 7,
\text{Width of a cell} = \frac{1.300 - 0.159}{7} = 0.163,
\text{Low value of first cell} = 0.159.

The parameters for the histogram are now specified. The next step is to go over the set of observations and count the number that fall into each of the 7 cells. The results of this step are shown in Table 7.4. The histogram is then constructed as shown in Figure 7.14.
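For completeness, the cell counting can be automated. The VB.NET sketch below is illustrative only; it applies Sturges' rule to the 47 concrete dumping durations listed above and prints the cell boundaries and frequencies summarized in Table 7.4.

Imports System.Linq

Module SturgesHistogram
    Sub Main()
        ' Concrete dumping durations (minutes), as listed above.
        Dim data As Double() = {
            0.367, 0.422, 0.379, 0.615, 0.206, 0.659,
            0.413, 0.912, 0.479, 0.769, 0.161, 0.359,
            0.844, 0.824, 0.382, 0.906, 0.412, 0.461,
            0.453, 0.479, 0.691, 0.282, 0.159, 0.688,
            0.362, 0.499, 0.326, 0.551, 1.3, 0.953,
            0.162, 0.45, 0.852, 0.577, 0.207, 0.495,
            0.528, 1.282, 0.589, 0.315, 0.466, 0.309,
            0.982, 0.393, 0.42, 0.404, 0.716}

        Dim n As Integer = data.Length
        Dim cells As Integer = CInt(Math.Ceiling(1 + 3.3 * Math.Log10(n)))   ' Sturges' rule
        Dim low As Double = data.Min()
        Dim width As Double = (data.Max() - low) / cells

        Dim counts(cells - 1) As Integer
        For Each x As Double In data
            ' Place each observation in a cell; the maximum observation falls in the last cell.
            Dim j As Integer = Math.Min(CInt(Math.Floor((x - low) / width)), cells - 1)
            counts(j) += 1
        Next

        For j As Integer = 0 To cells - 1
            Console.WriteLine("Cell {0}: [{1:F3}, {2:F3})  frequency {3} ({4:P1})",
                              j + 1, low + j * width, low + (j + 1) * width,
                              counts(j), counts(j) / CDbl(n))
        Next
    End Sub
End Module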
Having constructed an appropriate histogram, the next step in selecting a distribution as an input model is to relate the shape of the histogram to that of a known distribution. A somewhat "bell shaped" histogram suggests the use of a normal distribution (truncated for duration models). The histogram in Figure 7.14 suggests a beta, gamma, or lognormal distribution. After some experience with input modelling, one should easily be able to relate the shapes of a histogram to those of the theoretical PDFs.

An even better approach for selecting distributions is to start with a "family" of distributions. Families like the beta family, the Pearson family, and the Johnson family (Johnson, 1949), amongst others, give the simulationist

Table 7.4: Concrete Dumping Durations (Histogram Construction)

Cell   Low Value   High Value   Frequency   Relative Frequency
1      0.159       0.322        8           17.0%
2      0.322       0.485        18          38.3%
3      0.485       0.648        7           14.9%
4      0.648       0.811        5           10.6%
5      0.811       0.974        6           12.8%
6      0.974       1.137        1           2.1%
7      1.137       1.300        2           4.3%

Figure 7.14: Concrete Dumping Durations (Completed Histogram)



the flexibility of attaining a wide variety of shapes with the same distribution model. Figure 7.15 shows samples of PDFs from the beta family attained by varying the shape parameters α and β of the beta distribution.

Figure 7.15: PDFs from the Beta Family of Distributions (shape parameters (α, β) of (0.5, 0.5), (5.0, 1.0), (1.0, 3.0), (2.0, 2.0), and (2.0, 5.0))

Once we have decided on the type of distribution to use as an input model, we need to determine the distribution parameters that most closely match the data. We discuss three different methods of doing this.

7.6.3 The Method of Moments


Suppose we were to generate 20 samples from a normal distribution with
a mean of 20 and a variance of 16. We might very well end up with the
following set of numbers:
19.58592 21.22451 10.46682 15.17291
12.57538 24.64593 22.51534 15.58736
17.10298 22.23960 24.83332 21.09920
24.59406 16.76235 18.80357 13.17760
18.35137 20.99394 23.89557 22.27927
Since the theoretical mean of the distribution is 20, and the theoretical vari-
ance is 16, we expect the mean and variance of our sampled numbers to be
similar. In fact, the mean of our numbers is 19.29535, and the variance is
17.50245, both of which are reasonably close to the theoretical values. Now

if we were to sample another 20 numbers, we would not expect the new set to
have the exact same mean and variance; nevertheless, we would expect the
new mean and variance to still be reasonably close to the theoretical values.
It is of course possible (though highly unlikely) that the mean and variance of our new data are quite different from the theoretical values; this is a random process after all. However, we can be fairly confident that most of the time the statistical estimators for our sample will be close to the theoretical ones. In fact, if we were to generate larger and larger samples we would become more and more confident of this, until, at the limit (sample size tending to infinity), the estimators for the sample equal the theoretical ones.
Now suppose that we are aware that the above data was sampled from a
normal distribution, but that we do not know the distribution's parameters.
In order to determine them, we might simply assume that the theoretical
mean and variance of the distribution are the same as the mean and variance
of the data, i.e., that the data was sampled from a normal distribution with
a mean of 19.29535 and a variance of 17.50245. When we do this, we are
using the method of moments to find the parameters of the distribution.
In general, if x1, . . . , xn are random deviates sampled from a random variable X, then the jth sample moment of the deviates is defined to be:

m'_j = \frac{1}{n}\sum_{i=1}^{n} x_i^j.

In particular, \bar{X} = m'_1 is the mean of the deviates, and (as is easily verified) S^2 = m'_2 - m'^2_1 is the (population) variance of the deviates.
Next, suppose we believe that X is closely modelled by a certain probability distribution with parameters θ1, . . . , θq and probability density function f_X. The jth moment of the distribution is defined to be:

\mu'_j(\theta_1, \ldots, \theta_q) = \int_{-\infty}^{\infty} x^j f_X(x; \theta_1, \ldots, \theta_q)\,dx.

To establish the values of the parameters θ1, . . . , θq using the method of moments, we set up q equations in q unknowns:

m'_1 = \mu'_1(\theta_1, \ldots, \theta_q),
m'_2 = \mu'_2(\theta_1, \ldots, \theta_q),
\vdots
m'_q = \mu'_q(\theta_1, \ldots, \theta_q),

and solve for θ1, . . . , θq in terms of m'_1, . . . , m'_q. In this way, we have expressed θ1, . . . , θq in terms that can be easily calculated from our sample data.

Fitting an Exponential Distribution


An exponential distribution has a single parameter µ > 0 (the mean of the distribution), and a probability density function of:

f_X(x; \mu) = \begin{cases} \frac{1}{\mu} e^{-x/\mu} & \text{if } x \ge 0, \\ 0 & \text{otherwise.} \end{cases}
The first moment of the distribution can be calculated as follows:

\mu'_1(\mu) = \int_{-\infty}^{\infty} x f_X(x;\mu)\,dx = \frac{1}{\mu}\int_{0}^{\infty} x e^{-x/\mu}\,dx
= \left[-x e^{-x/\mu}\right]_0^{\infty} + \int_0^{\infty} e^{-x/\mu}\,dx \quad\text{(integration by parts)}
= (0 - 0) + \left[-\mu e^{-x/\mu}\right]_0^{\infty} = 0 + (0 + \mu) = \mu.

To estimate µ using the method of moments, we set up a single equation in one unknown:

\bar{X} = m'_1 = \mu'_1(\mu) = \mu.

This equation shows that the method of moments estimate for µ is simply
the mean of our data.

Fitting a Normal Distribution


A normal distribution has parameters µ (the mean of the distribution) and σ² > 0 (its variance). The probability density function is defined to be:

f_X(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},

and the first two moments of the distribution are:

\mu'_1(\mu, \sigma^2) = \mu, \qquad \mu'_2(\mu, \sigma^2) = \mu^2 + \sigma^2.

(see Hahn and Shapiro (1967) for derivations of these equations).


To estimate µ and σ 2 using the method of moments, we setup two equa-
tions in two unknowns:

m01 = µ01 (µ, σ 2 ) = µ,


m02 = µ02 (µ, σ 2 ) = µ2 + σ 2 .

and solve for µ and σ 2 to get:

µ = m01 = X̄,
σ 2 = m02 − m02 2
1 = S .

Thus, the method of moments estimate for µ is the mean of our samples, and
the estimate for σ 2 is the (population) variance of our samples.

Fitting a Gamma Distribution


A gamma distribution has parameters k > 0 (the shape parameter) and θ > 0 (the scale parameter). The probability density function is defined to be:

f_X(x; k, \theta) = \frac{1}{\Gamma(k)\theta^k}\, x^{k-1} e^{-x/\theta},

where Γ denotes the gamma function. The first two moments of the distribution are:

\mu'_1(k, \theta) = k\theta, \qquad \mu'_2(k, \theta) = k\theta^2(k + 1).

(see Hahn and Shapiro (1967) for derivations of these equations).


To estimate k and θ using the method of moments, we set up two equations in two unknowns:

m'_1 = \mu'_1(k, \theta) = k\theta, \qquad m'_2 = \mu'_2(k, \theta) = k\theta^2(k + 1),

and solve for k and θ to get:

k = \frac{m'^2_1}{m'_2 - m'^2_1} = \frac{\bar{X}^2}{S^2}, \qquad \theta = \frac{m'_2 - m'^2_1}{m'_1} = \frac{S^2}{\bar{X}}.
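These closed-form expressions make the method of moments trivial to apply in code. The VB.NET sketch below is illustrative only: it computes the sample mean and (population) variance of a data set and returns the gamma parameters k and θ; the small data set used in Main is invented purely for the demonstration.

Module MethodOfMomentsGamma
    ' Returns the method of moments estimates (k, theta) for a gamma distribution.
    Sub FitGamma(data As Double(), ByRef k As Double, ByRef theta As Double)
        Dim n As Integer = data.Length
        Dim mean As Double = 0
        For Each x As Double In data
            mean += x
        Next
        mean /= n

        Dim variance As Double = 0            ' population variance S^2 = m'2 - m'1^2
        For Each x As Double In data
            variance += (x - mean) * (x - mean)
        Next
        variance /= n

        k = mean * mean / variance            ' k = Xbar^2 / S^2
        theta = variance / mean               ' theta = S^2 / Xbar
    End Sub

    Sub Main()
        ' Hypothetical activity durations (hours), assumed for illustration only.
        Dim durations As Double() = {12.4, 15.1, 9.8, 14.3, 11.7, 16.5, 13.2, 10.9, 12.8, 14.0}
        Dim k, theta As Double
        FitGamma(durations, k, theta)
        Console.WriteLine("k = {0:F3}, theta = {1:F3}", k, theta)
    End Sub
End Module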

7.6.4 The Method of Maximum Likelihood


Let's consider again the 20 samples from a normal distribution we discussed
earlier:
19.58592 21.22451 10.46682 15.17291
12.57538 24.64593 22.51534 15.58736
17.10298 22.23960 24.83332 21.09920
24.59406 16.76235 18.80357 13.17760
18.35137 20.99394 23.89557 22.27927

Suppose we wanted to determine whether it is more likely that the samples were drawn from a distribution with parameters µ = 20 and σ² = 16, or from a distribution with parameters µ = 12 and σ² = 25. We could go about this by evaluating the likelihood function:

L(\mu, \sigma^2) = \prod_{i=1}^{n} f_X(x_i; \mu, \sigma^2).

In the case of our data:

L(20, 16) \approx 1.23442 \times 10^{-25} \quad\text{and}\quad L(12, 25) \approx 5.65825 \times 10^{-35}.

So we can see that there is a higher likelihood that the samples came from a distribution with parameters µ = 20 and σ² = 16 than from a distribution with parameters µ = 12 and σ² = 25.
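The two likelihood values above can be reproduced with a few lines of code. The VB.NET sketch below is illustrative only: it multiplies the normal densities of the 20 samples for each candidate parameter set. Because the product of many small densities underflows easily for larger samples, production code would normally work with the log-likelihood instead, as discussed next.

Module LikelihoodComparison
    ' Normal probability density function.
    Function NormalPdf(x As Double, mu As Double, sigma2 As Double) As Double
        Return Math.Exp(-(x - mu) * (x - mu) / (2 * sigma2)) / Math.Sqrt(2 * Math.PI * sigma2)
    End Function

    ' Likelihood of the sample under a normal distribution with the given parameters.
    Function Likelihood(data As Double(), mu As Double, sigma2 As Double) As Double
        Dim product As Double = 1
        For Each x As Double In data
            product *= NormalPdf(x, mu, sigma2)
        Next
        Return product
    End Function

    Sub Main()
        ' The 20 samples listed above.
        Dim samples As Double() = {
            19.58592, 21.22451, 10.46682, 15.17291,
            12.57538, 24.64593, 22.51534, 15.58736,
            17.10298, 22.2396, 24.83332, 21.0992,
            24.59406, 16.76235, 18.80357, 13.1776,
            18.35137, 20.99394, 23.89557, 22.27927}

        Console.WriteLine("L(20, 16) = {0:E5}", Likelihood(samples, 20, 16))
        Console.WriteLine("L(12, 25) = {0:E5}", Likelihood(samples, 12, 25))
    End Sub
End Module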
Now suppose that we were to use differential calculus to find values for µ and σ² at which L is maximal. For our data, we would find this happens when:

\mu \approx 19.29535, \quad \sigma^2 \approx 17.50245, \quad\text{and}\quad L(19.29535, 17.50245) \approx 1.75496 \times 10^{-25}.

In so doing, we have managed to find parameters for the distribution from which our samples were most likely to have been drawn. When we do this, we are using the method of maximum likelihood to find the parameters of the distribution.
In general, if x1, . . . , xn are random deviates sampled from a random variable X that we believe can be modelled by a probability distribution with parameters θ1, . . . , θq, then the likelihood function L is defined to be:

L(\theta_1, \ldots, \theta_q) = \prod_{i=1}^{n} f_X(x_i; \theta_1, \ldots, \theta_q).

We then proceed to find θ1, . . . , θq for which L is maximal.

In practice, however, we will find this to be difficult. The reason for this is that (for most distributions) L will be difficult to differentiate. A simple work-around for this is to consider the log-likelihood function:

l(\theta_1, \ldots, \theta_q) = \ln(L(\theta_1, \ldots, \theta_q)) = \sum_{i=1}^{n} \ln(f_X(x_i; \theta_1, \ldots, \theta_q)),

and since the logarithm function is monotonically increasing, the θ1, . . . , θq for which l is maximal will also be the θ1, . . . , θq for which L is maximal.

Fitting a Normal Distribution


A normal distribution has parameters µ (the mean of the distribution) and σ² > 0 (its variance). The probability density function is defined to be:

f_X(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.

If we have samples x1, . . . , xn drawn from a normally distributed random variable X, then the log-likelihood function is:

l(\mu, \sigma^2) = \sum_{i=1}^{n} \ln(f_X(x_i; \mu, \sigma^2))
= \sum_{i=1}^{n} \ln\!\left( \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x_i-\mu)^2}{2\sigma^2}} \right)
= \sum_{i=1}^{n} \left[ \ln\!\left( \frac{1}{\sqrt{2\pi\sigma^2}} \right) - \frac{(x_i-\mu)^2}{2\sigma^2} \right]
= -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2.

We first compute the partial derivative with respect to µ:

\frac{\partial}{\partial\mu}\, l(\mu, \sigma^2) = -\frac{1}{2\sigma^2}\sum_{i=1}^{n} \frac{\partial}{\partial\mu}(x_i-\mu)^2
= \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i-\mu)
= \frac{1}{\sigma^2}\left( \sum_{i=1}^{n} x_i - \sum_{i=1}^{n} \mu \right)
= \frac{n}{\sigma^2}(\bar{X}-\mu).

And since both n and σ² are strictly positive, this can only be zero when:

\mu = \bar{X}.

Next, we compute the partial derivative with respect to σ²:

\frac{\partial}{\partial\sigma^2}\, l(\mu, \sigma^2) = -\frac{n}{2\sigma^2} - \frac{1}{2}\sum_{i=1}^{n}(x_i-\mu)^2\, \frac{\partial}{\partial\sigma^2}\!\left(\frac{1}{\sigma^2}\right)
= -\frac{n}{2\sigma^2} + \frac{1}{2(\sigma^2)^2}\sum_{i=1}^{n}(x_i-\mu)^2
= \frac{1}{2\sigma^2}\left( -n + \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2 \right).

And again, since both n and σ² are strictly positive, this can only be zero when:

\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i-\mu)^2.

Substituting in µ = \bar{X} from above gives:

\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{X})^2 = S^2.
Thus, the method of maximum likelihood estimate for µ is the mean of our samples, and the estimate for σ² is the (population) variance of our samples. Note that for the normal distribution, the estimates obtained by the method of maximum likelihood are the same as the estimates obtained by the method of moments. This will not be the case in general.

7.6.5 The Method of Least Squares


The final method we present, that of least squares, will generally only be usable in cases where some form of numeric non-linear optimization is available. See, for example, the well-known downhill simplex method of Nelder and Mead (1965).

Suppose that x1, . . . , xn are random deviates sampled from a random variable X that we believe can be modelled by a probability distribution with parameters θ1, . . . , θq and cumulative distribution function F_X. We define a function R as follows:

R(\theta_1, \ldots, \theta_q) = \sum_{i=1}^{n} \left( \hat{F}_n(x_i) - F_X(x_i; \theta_1, \ldots, \theta_q) \right)^2,

where \hat{F}_n is the empirical distribution function defined by x1, . . . , xn. We now proceed to use our non-linear optimization algorithm to find the θ1, . . . , θq at which R is minimal. Note that most optimization algorithms will require an initial guess for the values of θ1, . . . , θq. For the fitting of probability distributions, this initial guess will usually be the θ1, . . . , θq provided by either the method of moments or the method of maximum likelihood. The least squares method then refines the values of the parameters.
The method of least squares compares the empirical distribution function
to the cumulative distribution function of the theoretical distribution we are
using to model the samples. It sums up the squares of the residuals (the
distance between the value of the empirical distribution function and the
theoretical cumulative distribution function at each of the xi ), and attempts
to find parameters that make this sum as small as possible. The process is
illustrated in Figure 7.16 below.

Figure 7.16: The Method of Least Squares

7.6.6 Testing for Goodness of Fit


Having parameterized a distribution, one should check for the goodness of fit by comparing the fitted distribution to the empirical distribution and assessing the quality of the fit obtained. Usually one would perform the goodness-of-fit test by using statistical tests (like the chi-squared or the K-S tests), or by visually assessing the quality of the fit.

Pearson's Chi-Squared Test


The Pearson chi-squared test is based on the measurement of the discrepancy between the histogram of the sample and the fitted probability density function. When the discrepancy is large enough, the test rejects the fitted model.
Suppose that x1, . . . , xn are random deviates sampled from a random variable X and that we have fitted a probability distribution with parameters \hat{\theta}_1, . . . , \hat{\theta}_q, probability density function \hat{f}_X, and cumulative distribution function \hat{F}_X to the data. We wish to use Pearson's chi-squared test to determine whether this probability distribution is a good fit to our samples. To do this, we need to bin our data, i.e., we need to divide the support of our probability distribution into adjacent (not necessarily equal) intervals:

[a_0, a_1), [a_1, a_2), \ldots, [a_{k-1}, a_k],

where it is possible that a_0 = −∞ and/or a_k = +∞. Next, we calculate:

N_j = \text{number of } x_i \in [a_{j-1}, a_j), \quad j = 1, \ldots, k,

and

p_j = \int_{a_{j-1}}^{a_j} \hat{f}_X(x; \hat{\theta}_1, \ldots, \hat{\theta}_q)\,dx, \quad j = 1, \ldots, k.

Note that from the way they're defined, we must have:

\sum_{j=1}^{k} N_j = n \quad\text{and}\quad \sum_{j=1}^{k} p_j = 1.

Finally, we calculate the chi-squared test statistic:

\chi^2 = \sum_{j=1}^{k} \frac{(N_j - np_j)^2}{np_j},

which should be "small" if the fit is good. We're left with two important questions:

1. How small does the test statistic χ² need to be for the fit to be considered good?

2. How do we determine the intervals [a_0, a_1), [a_1, a_2), . . . , [a_{k-1}, a_k]?

We deal with each question below.

Interpreting the Test Statistic. If the xi's were actually drawn from the distribution we're considering, then (in theory) the test statistic χ² is chi-squared distributed with k − 1 − q degrees of freedom. This means that we can determine the probability that the xi's were drawn from our distribution by calculating the area of the tail of a chi-squared distribution with k − 1 − q degrees of freedom, as shown in Figure 7.17.

Figure 7.17: Interpretation of the χ² Test Statistic

Prior to using the chi-squared test, it is customary to choose a significance level α ∈ (0, 1). The significance level represents the certainty we desire; thus, if we wanted to be 95% certain that the xi's were drawn from our fitted distribution, then we would choose α = 0.95, and we would accept the distribution as a good fit if and only if the area of the tail of the chi-squared distribution is greater than or equal to α.

Determining the Intervals. There is no generally accepted procedure to


determine the intervals and no procedure can be guaranteed to produce good

results in all cases. Nevertheless, a number of methods have been published.


We present that of Mann and Wald (1942) which is intended to be used when
n ≥ 200.
The first step is to determine the number of intervals k. In their paper,
Mann and Wald suggest the choice:
k = \left\lceil 4 \sqrt[5]{\frac{2(n-1)^2}{z_{1-\alpha}^2}} \right\rceil,

where α is the significance level of the test and z_{1−α} is the standard normal z-value corresponding to the probability 1 − α. We then generate the intervals
by setting:

a_j = \hat{F}_X^{-1}\!\left(\frac{j}{k}\right), \quad j = 1, \ldots, k,

where it is possible that a_0 = −∞ and/or a_k = +∞. By defining the intervals in this way they become equiprobable, i.e.:

p_j = \int_{a_{j-1}}^{a_j} \hat{f}_X(x)\,dx = \frac{1}{k}, \quad j = 1, \ldots, k.
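To make the procedure concrete, the VB.NET sketch below (illustrative only) fits an exponential distribution to a data set by the method of moments and computes the chi-squared statistic using equiprobable intervals. The synthetic data, the seed, the significance level, and the choice of an exponential model are all assumptions made for the illustration; the resulting statistic would still need to be compared against a table of the chi-squared distribution with k − 1 − q degrees of freedom.

Module ChiSquaredGoodnessOfFit
    Sub Main()
        ' Synthetic interarrival times (days), generated here only so the sketch is self-contained.
        Dim rng As New Random(3)
        Dim n As Integer = 200
        Dim data(n - 1) As Double
        For i As Integer = 0 To n - 1
            data(i) = -5.0 * Math.Log(1.0 - rng.NextDouble())   ' exponential data with mean 5
        Next

        ' Fit an exponential distribution by the method of moments: mu = sample mean.
        Dim mu As Double = 0
        For Each x As Double In data
            mu += x
        Next
        mu /= n

        ' Mann-Wald number of equiprobable intervals (z taken as 1.645 here, an assumption).
        Dim z As Double = 1.645
        Dim k As Integer = CInt(Math.Ceiling(4 * Math.Pow(2 * (n - 1) * (n - 1) / (z * z), 0.2)))

        ' Each observation's interval index is Floor(F(x) * k) for the fitted CDF F(x) = 1 - e^(-x/mu).
        Dim counts(k - 1) As Integer
        For Each x As Double In data
            Dim j As Integer = Math.Min(CInt(Math.Floor((1 - Math.Exp(-x / mu)) * k)), k - 1)
            counts(j) += 1
        Next

        ' Chi-squared statistic; every equiprobable interval has expected count n / k.
        Dim expected As Double = n / CDbl(k)
        Dim chi2 As Double = 0
        For j As Integer = 0 To k - 1
            chi2 += (counts(j) - expected) * (counts(j) - expected) / expected
        Next

        Console.WriteLine("Fitted mean mu = {0:F3}", mu)
        Console.WriteLine("k = {0}, chi-squared = {1:F3}, degrees of freedom = {2}", k, chi2, k - 2)
    End Sub
End Module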

The Kolmogorov-Smirnov Test


The Kolmogorov-Smirnov test (or simply the K-S test) is based on measuring the largest discrepancy between the empirical distribution function defined by the samples and the fitted cumulative distribution function.
Suppose that x1, . . . , xn are random deviates sampled from a random variable X and that we have fitted a probability distribution with cumulative distribution function \hat{F}_X to the data. We wish to use the Kolmogorov-Smirnov test to determine whether this probability distribution is a good fit to our samples.

First, we may assume (without loss of generality) that the xi's are sorted, i.e., that:

x_1 \le x_2 \le \cdots \le x_{n-1} \le x_n.
Next, we need to calculate two numbers:

D^+ = \sup\{\hat{F}_n(x) - \hat{F}_X(x) : x \in \mathbb{R}\} = \max\left\{ \frac{i}{n} - \hat{F}_X(x_i) : i = 1, \ldots, n \right\},

and

D^- = \sup\{\hat{F}_X(x) - \hat{F}_n(x) : x \in \mathbb{R}\} = \max\left\{ \hat{F}_X(x_i) - \frac{i-1}{n} : i = 1, \ldots, n \right\}.

The Kolmogorov-Smirnov test statistic is then:

D = \max\{D^+, D^-\}.

The process is illustrated graphically in Figure 7.18 below.

Figure 7.18: The Kolmogorov-Smirnov Test

Once obtained, the test statistic D is compared to a value from a table of critical K-S values. When the computed D is larger than the theoretical one, the fit is rejected.
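The two maxima are straightforward to compute once the sample is sorted. The VB.NET sketch below is illustrative only: it evaluates D+, D−, and D for a small, invented sample against an exponential CDF fitted by the sample mean; the critical value to compare against would still come from a K-S table.

Imports System.Linq

Module KolmogorovSmirnovTest
    Sub Main()
        ' Hypothetical observations (e.g., durations in minutes), assumed for illustration.
        Dim data As Double() = {2.1, 0.7, 3.9, 1.5, 5.2, 0.9, 2.8, 4.4, 1.1, 3.3}
        Array.Sort(data)
        Dim n As Integer = data.Length

        ' Fit an exponential distribution by its sample mean.
        Dim mu As Double = data.Sum() / n

        Dim dPlus As Double = 0
        Dim dMinus As Double = 0
        For i As Integer = 1 To n
            Dim cdf As Double = 1 - Math.Exp(-data(i - 1) / mu)       ' fitted CDF at x(i)
            dPlus = Math.Max(dPlus, i / CDbl(n) - cdf)                ' i/n - F(x(i))
            dMinus = Math.Max(dMinus, cdf - (i - 1) / CDbl(n))        ' F(x(i)) - (i-1)/n
        Next

        Console.WriteLine("D+ = {0:F4}, D- = {1:F4}, D = {2:F4}", dPlus, dMinus, Math.Max(dPlus, dMinus))
    End Sub
End Module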

Visual Assessment of the Quality of the Fit


Among the most widely used techniques for measuring the goodness of fit of a theoretical distribution to an empirical one is the method of visual inspection of the fit. Although the method does not bear much statistical or mathematical weight, it proves to be as effective as any other test. The visual assessment of the quality of fit is usually conducted in conjunction with the statistical tests as an assurance of the test results. Basically, the method is simply to plot both the empirical and fitted CDFs on one plot and compare how well the fitted CDF tracks the empirical one. Alternatively, one can compare how well the shape of the sample histogram compares to that of the theoretical PDF. When the CDF is available, it is always better to use it in your comparison because, as previously mentioned, a histogram can be easily distorted and can attain any desired shape.

7.7 Output Analysis


Proper analysis of simulation results is critical to good decision making. Law
and McComas (1986) point out that one of the most common (and poten-
tially dangerous) practices in simulating manufacturing systems is that of "making only one run of a stochastic simulation." Making decisions based
on a stochastic simulation of a system with one replication can be costly.
The problem is often aggravated when the system is associated with high
degrees of variability, examples of which occur when modelling construction-
equipment breakdowns.
For simulation models in which all aspects are deterministic, one simulation run is sufficient to determine the output. A stochastic simulation on
the other hand will not produce the same output when run repeatedly with
independent random seeds. This requires one to make a number of runs with
independent seeds for the random-number-generating streams to ensure that
a true picture of the system under investigation is provided. Typically, a
simulationist collects sample output from the various runs conducted and
then uses the sample as the basis for decision-making.
A typical analysis of simulation output usually includes determination of the following: whether the simulation is deterministic or stochastic, and whether the simulation reflects a static, transient, or steady state. The following discussion of output analysis is specific to that range of simulation models that can be classified as transient simulations. Wilson (1984) defines transient simulation as follows: "a simulation is transient if the modelling objective is to estimate parameters of a time-dependent output distribution over some portion of a finite time horizon for a given set of initial conditions." Most construction operations would be covered by this definition.
Wilson (1984) categorized the analysis of transient simulation by whether
or not normal distribution theory can be applied to the analysis. Two types
of analysis are relevant: analysis of output parameters that do not significantly deviate from normality, and analysis of output parameters that have

non-normal responses. The second has not been frequently encountered in


simulation of construction processes. An extensive treatment of the analysis
of output data can be found in Welch (1983).

7.7.1 Checking for Normality


Assuming that our target output parameter is denoted by X , one would col-
lect a sample {x1 , . . . , xn } of results of n distinctly seeded simulation runs for
the same initial conditions. To apply normal theory, the sample {x1 , . . . , xn }
should be normally distributed. A number of methods can be used to test
the hypothesis that {x1 , . . . , xn } originated from a normal distribution.
In general, if we postulate that {x1 , . . . , xn }, which originated from the
distribution FX , has the same shape as some known distribution FY , then to
test the hypothesis that these two distributions differ only in location and
scale parameters, one can form the ordered statistics:

x(1) ≤ x(2) ≤ · · · ≤ x(n) ,

and plot the points

\left( x_{(i)},\; F_Y^{-1}\!\left(\frac{i-c}{n}\right) \right), \quad i = 1, \ldots, n,

where c is a constant (usually set to 0.5) that depends on the distribution F_Y.


A plot that is linear (or approximately linear) is considered an indication of
a true hypothesis (Hahn & Shapiro, 1967).

The Shapiro-Wilk Test


The Shapiro-Wilk W statistic (Shapiro & Wilk, 1965), has been shown to pro-
vide an excellent test for normality of a sample of data (Pearson, D'Agostino,
& Bowman, 1977). The test was further extended by Royston (1982) to en-
able testing of samples of up to 2000 observations. Hahn and Shapiro (1967)
provide good coverage of the test for samples of 50 observations or less.
In general terms, the Shapiro-Wilk test statistic W can be calculated as follows:

W = \frac{\left( \sum_{i=1}^{n} a_i x_{(i)} \right)^2}{(n-1)S^2},

where the a_i's are normalization regression coefficients usually obtained from tables or computer programs, x_(i) is the ith ordered statistic, and S² is the sample variance. The value of W is usually compared to the percentile values of the distribution of the test at a specific level of certainty (see, for example, the tabulated values in Hahn and Shapiro (1967)). In subjective terms, the sample would be normal (or close to normal) when W is close to 1.

7.7.2 Developing Point and Interval Estimates


Estimating the Mean
Given that the output data are normally (or approximately normally) distributed, normal theory may be used to construct the confidence interval around the mean of the sample data. The unbiased estimator of the mean µ for a sample of the normal population {x1, . . . , xn} is the sample mean \bar{X}, calculated as follows:

\bar{X} = \frac{1}{n}\sum_{i=1}^{n} x_i.

An exact 100(1 − α)% confidence interval is then:

\bar{X} \pm t_{(1-\alpha/2),(n-1)} \frac{S}{\sqrt{n}},

where t_{(1−α/2),(n−1)} corresponds to the upper (1 − α/2) point of the Student's t-distribution with n − 1 degrees of freedom and S denotes the sample standard deviation.
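As a small worked sketch (VB.NET, illustrative only), the following code builds this confidence interval for a set of simulation outputs. The ten output values and the Student's t critical value (2.262 for a 95% interval with 9 degrees of freedom) are assumptions supplied for the demonstration; in practice the t value would come from a table or statistical library.

Module ConfidenceIntervalForMean
    Sub Main()
        ' Hypothetical project durations (days) from 10 independently seeded runs.
        Dim results As Double() = {402, 398, 411, 405, 396, 408, 415, 401, 399, 407}
        Dim n As Integer = results.Length

        Dim mean As Double = 0
        For Each x As Double In results
            mean += x
        Next
        mean /= n

        Dim s2 As Double = 0                       ' sample variance with n - 1 in the denominator
        For Each x As Double In results
            s2 += (x - mean) * (x - mean)
        Next
        s2 /= (n - 1)
        Dim s As Double = Math.Sqrt(s2)

        ' t critical value for a 95% confidence interval with n - 1 = 9 degrees of freedom.
        Dim t As Double = 2.262
        Dim halfWidth As Double = t * s / Math.Sqrt(n)

        Console.WriteLine("Mean = {0:F2}, S = {1:F2}", mean, s)
        Console.WriteLine("95% CI: [{0:F2}, {1:F2}]", mean - halfWidth, mean + halfWidth)
    End Sub
End Module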

Estimating the Variance


An unbiased estimator of the variance σ² is obtained from the variance of the sample S², which is given by:

S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{X})^2.

The 100(1 − α)% confidence interval for σ² (Wilson, 1984) is then:

\left[ \frac{(n-1)S^2}{\chi^2_{(1-\alpha/2),(n-1)}},\; \frac{(n-1)S^2}{\chi^2_{(\alpha/2),(n-1)}} \right],

where \chi^2_{(1-\alpha/2),(n-1)} corresponds to the upper (1 − α/2) point of the χ²-distribution with n − 1 degrees of freedom.

Estimating Arbitrary Quantile Points


In addition to estimating means and variances for output parameters in a
simulation experiment, a practitioner is often interested in other statistics.
Estimating an arbitrary quantile point can often be helpful in planning a
construction operation. The 95th percentile of job completion time can give a
simulationist more information than the mean completion time, for example.
The qth quantile point X_q of a random variable X is defined to be:

X_q = \min\{x \in \mathbb{R} : q \le F_X(x)\},

where F_X denotes the cumulative distribution function of X. An estimate for X_q from a sample of observations can be obtained using an approximation of the binomial distribution by the standard normal distribution for large sample sizes (see Welch (1983) for an exact estimate). The estimator for X_q is given by:

X_q = \bar{X} + z_q \sqrt{\frac{n-1}{n}\, S^2},

where z_q is the critical value from the standard normal distribution at the specified cutoff value q.

A 100(1 − α)% confidence interval around the estimator X_q (Wilson, 1984) is given by:

X_q \pm z_{(1-\alpha/2)} \sqrt{\frac{1}{n}\left(1 + \frac{z_q^2}{2}\right)\frac{n-1}{n}\, S^2}.

Estimating Probabilities
The probability of completing a job on time is also very valuable in a num-
ber of situations. A classic example would be the simulation of scheduling
networks (e.g., PERT type) in an attempt to determine the probability of
meeting a target date.
The cumulative distribution function F_X of an output parameter X tells us the probability that X does not exceed a particular fixed value x:

F_X(x) = \Pr\{X \le x\}, \quad x \in \mathbb{R}.

Assuming that X is normally distributed (or approximately normal), the cumulative distribution function can be estimated by:

F_X(x) = \Phi\!\left(\frac{x - \bar{X}}{S}\right),

where Φ denotes the cumulative distribution function (CDF) of the standard normal distribution.

An approximate 100(1 − α)% confidence interval for the probability is given by:

\Phi\!\left(\frac{x - \bar{X}}{S}\right) \pm \frac{z_{(1-\alpha/2)}}{\sqrt{n}}\, \phi\!\left(\frac{x - \bar{X}}{S}\right) \sqrt{1 + \frac{1}{2}\left(\frac{x - \bar{X}}{S}\right)^2},

where φ denotes the probability density function (PDF) of the standard normal distribution.
Another way of estimating the probability (or any arbitrary quantile) is
by directly referring to the empirical CDF that results from the sample being
analyzed. This, however, produces only a point estimate of the probability
and not a confidence interval.
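To illustrate with the same hypothetical numbers used earlier (X̄ = 42.0 hours, S = 3.5 hours), the estimated probability of finishing within a target of x = 45 hours would be:

F_X(45) \approx \Phi\!\left(\frac{45 - 42.0}{3.5}\right) = \Phi(0.857) \approx 0.80.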

7.8 Example: Equipment Breakdowns


7.8.1 Introduction
A significant number of construction processes are highly equipment intensive,
especially those within the heavy civil construction domain. Consequently,
equipment is regarded as a critical aspect of the construction process that
needs to be managed well to guarantee project success. The goal of most
construction engineers is to maintain a fleet of equipment at the highest
utilization rates possible. In order to do so, construction engineers need enough
information beforehand about stochastic events that could lower equipment
utilization. Maintenance is an example of such an event, but it can be anticipated
and scheduled; the duration associated with processing this event is what
typically makes it stochastic. Equipment failures or breakdowns are another
example of events that can deter utilization rate increases. Breakdowns are
typically stochastic and unscheduled. Two parameters are used to define
breakdowns: the Mean Time Between Failures
(MTBF) and the Mean Time To Repair (MTTR). Given that the occurrence
of equipment failures is highly uncertain, values for the time between failure
events will also vary. An effective way to numerically represent this variation
is through the use of statistical distributions. The time that it takes to fix
broken equipment is another parameter that varies considerably; it is also
modelled using a statistical distribution.
The most effective way of studying stochastic phenomena such as equipment
failures is by conducting simulation studies. These studies can generate
useful information, such as the number of times that a specific equipment
type broke down, and statistics on how long it took to fix. If the simulation
models are developed in an intelligent fashion, they can be configured to
generate information about the optimal number of mechanics and repair bays to
designate in order to achieve preset performance targets, e.g., the maximum time
the equipment should wait before repair work can commence, the maximum
number of equipment units waiting in a queue, etc.

In typical simulation studies, the statistical distributions used to represent
parameters for equipment failure, repair and maintenance processes are
defined through an input modelling process. This process can be carried out
with the guidance of experts or using data collected on similar past projects.

The following example has been set up to illustrate how these types of
information can be generated for typical equipment maintenance and repair
problems within a practical setting.

7.8.2 Problem Description


A construction company in the business of mining tar sands has committed a
fleet of shovels, trucks, graders, loaders, and scrapers to one of its reclamation
projects. Fleet matching computations indicated that the quantities of each
type of equipment presented in Table 7.5 were appropriate for the project. The
company stuck with these numbers to guarantee efficient operations on site.

The reclamation is scheduled to continue for several years in the same
location. Thanks to the good data collection and archiving culture at the
company, a significant amount of data exists after a couple of months of running
this project. Data was collected, in hours, for the time between the occurrences
of equipment failures. This data is summarized for each equipment type in
Tables 7.6, 7.7, 7.8, 7.9, and 7.10.

Table 7.5: Fleet Size for Different Equipment Types

Equipment   Shovels   Trucks   Scrapers   Graders   Loaders
Quantity       4        12        3          2         5

7.8.3 Modelling Assumptions


A number of simplifying assumptions are made in order to facilitate the
simulation modelling process. These include:

• The time unit is hours.

• The time interval between subsequent maintenance events is fixed at
  200 hours for all types of equipment. The interval between failures varies
  based on the equipment type; this variation can be inferred from the field
  data presented in Tables 7.6 through 7.10.

• The time that it takes to service equipment working on this reclamation
  project is fixed. It takes 5 hours to maintain equipment and 12 hours
  to repair failed equipment.

• Maintenance does not prevent equipment from breaking down. The two
  types of events take place independently for all types of equipment.

• There are two main kinds of breakdown and maintenance: the first
  kind requires a Heavy Duty Machine (HDM) crew, and the second
  requires a welding crew. Table 7.11 summarizes the probability of each
  type of repair or maintenance occurring for each type of equipment.

7.8.4 Modelling Strategy


This simulation problem is solved in four phases. The objective of the
first phase is to demonstrate how input modelling is performed using real
project data within Simphony.NET. Statistical distributions obtained from
this phase are used for modelling the time between the occurrences of breakdown
events for the different types of equipment. The next phase involves
developing a base model using the result of the input modelling phase and the
specifications detailed in the problem statement.

Table 7.6: Time Between Shovel Failures (Hours)

185.4248 185.2592 189.4037 185.7541 185.5408 186.0830


191.6359 186.0898 185.5025 199.0247 190.6066 187.0579
196.7125 185.3081 200.5033 190.5554 190.6518 185.5378
185.0757 191.1709 188.2794 189.6085 199.2338 185.7370
200.8563 185.5074 195.8465 194.8978 189.9741 195.5116
189.0947 185.0732 194.3319 189.3698 200.4295 197.7001
186.5487 185.3920 186.5201 191.1891 193.6019 190.6724
199.0076 199.2735 188.4295 185.1013 185.5676 197.1232
186.9678 185.2044 193.9194 191.6954 191.8414 187.2315
186.1647 191.6158 185.7317 201.0837 195.1811 185.0427
185.9663 200.2883 192.6204 193.0617 210.9454 189.6185
189.8455 196.1845 186.9987 190.3089 190.3882 189.4124
185.9223 193.8499 195.4447 192.4796 185.2915 188.7804
186.0380 198.3606 185.7923 186.9930 186.4467 185.4572
194.1041 194.5073 191.7727 196.9237 185.1362 189.9352
189.0194 186.6056 189.7998 186.7301 202.2978 191.1455
186.6614 193.3671 189.7143 185.0518  
Table 7.7: Time Between Truck Failures (Hours)

151.3896 177.7637 157.0715 150.8631 161.0016 157.9386


155.4598 156.0307 171.8720 150.1613 153.9945 181.1125
164.2686 172.2080 176.1497 150.0360 156.9124 157.2083
170.8017 150.7494 191.1156 175.5760 156.7216 174.8832
163.6556 182.4185 150.8230 166.4273 179.6086 150.7415
152.8602 170.8195 151.5536 199.4599 202.4538 163.4710
150.6846 155.1612 164.4741 186.1770 150.7764 161.9803
150.7309 172.0229 150.0001 155.1020 153.5869 158.2819
154.3500 151.3852 153.9681 152.3795 172.9829 162.2189
152.0010 150.9767 150.4958 176.4272 159.2532 156.4382
156.8708 153.8189 184.4131 188.8821 173.6428 155.7564
161.0489 150.3329 153.5670 153.8863 177.0462 162.8833
156.9632 173.6662 152.2037 150.3045 202.3715 154.4010
186.9501 194.2155 160.2789 167.3139 168.2202 152.5418
160.8250 198.6551 158.4568 185.8983 163.9519 150.9801
153.1987 159.6937 150.9733 165.8393 150.2937 151.7191
177.7009 162.4489 150.1690 172.4434  

Table 7.8: Time Between Scraper Failures (Hours)

192.1258 197.7851 190.1820 191.3455 201.8511 190.0827


201.0200 193.1637 191.5034 199.3544 193.1740 202.1897
201.4117 209.3637 199.4640 197.0075 199.0565 194.0579
191.1465 190.5916 219.0495 190.0930 193.4391 207.8424
190.2055 195.6602 210.0129 210.9467 204.0846 192.8526
192.5349 195.3966 191.0899 209.7804 194.2100 216.5200
200.5810 207.8807 201.7733 197.9856 192.8677 190.0334
194.6267 211.9233 190.2142 195.5253 190.8610 207.5175
190.2578 197.6171 192.1617 201.9235 206.4042 207.7870
193.2515 208.5249 199.4605 197.1965 193.3291 190.1457
197.6628 202.4600 217.9326 190.7231 196.6221 205.3308
195.9100 191.6810 191.8078 198.4009 199.4223 203.8626
197.7997 213.7550 201.8805 197.9103 198.3025 191.1753
209.2756 190.5571 192.2528 190.0340 218.7764 210.5136
192.5293 209.9892 190.6193 208.2666 190.5202 194.0485
196.3495 209.8592 198.8416 206.1132 205.3396 206.6699
212.2177 190.3143 204.4835 194.9456  
Table 7.9: Time Between Grader Failures (Hours)

182.6145 222.6758 197.2685 181.0053 233.0109 181.0894


188.3663 204.7368 180.7427 180.4094 196.0505 196.6653
227.2125 182.5374 188.6462 195.7062 181.7822 180.2508
181.8483 211.8097 180.0029 208.2717 183.2510 181.8167
244.2046 180.0156 199.8327 235.4490 180.0041 192.2826
230.5785 234.5250 208.0268 196.0776 180.0864 181.5653
228.6923 180.2302 180.7475 208.0898 264.4370 183.3545
193.6938 214.2873 207.0179 180.0012 236.8091 190.2429
192.3086 180.3708 192.4743 180.1476 190.0638 246.6106
198.6127 210.0188 180.0744 181.0519 180.4266 220.3105
180.1303 208.0642 213.2641 200.9105 194.3078 203.1293
208.4600 266.3075 208.6791 188.5582 223.5930 264.4582
192.2846 246.5676 202.9219 204.8117 191.1856 180.2206
211.2497 246.8050 202.8321 214.0543 182.2835 195.5072
201.5821 218.9913 183.2508 197.7786 217.0301 247.0352
187.1567 205.9979 190.9205 180.5019 200.9293 198.4087
186.7887 245.8728 182.1134 211.6624  

Table 7.10: Time Between Loader Failures (Hours)

212.1505 240.3805 249.1395 193.9391 249.9012 249.5195


249.1236 200.3434 247.0920 249.7770 244.2286 247.0666
212.8566 249.9840 228.3115 224.5852 210.9756 231.6409
185.6555 247.4253 240.5908 216.1042 249.9740 245.6097
192.9396 243.2973 210.0375 236.1423 186.0401 244.3932
219.3626 230.9094 226.3451 216.7116 222.7115 232.0361
229.6320 190.0570 191.7105 248.0912 249.8851 249.9549
240.0442 206.8480 245.8274 182.1641 239.6035 247.1896
204.6390 225.1254 229.6279 247.1347 210.4734 236.7776
249.9879 228.0621 239.8073 207.6594 243.3582 243.7468
196.2722 249.8891 225.7151 201.5837 212.4515 248.7050
247.6994 249.8287 188.1743 225.1386 249.9730 216.0447
245.1261 244.8491 245.1146 247.2068 249.8330 237.0649
249.1071 249.7583 248.8989 247.4713 237.9781 249.7083
249.2395 241.4706 239.0804 240.7844 191.0978 249.7788
206.9419 242.9631 237.6764 199.9880 249.9924 232.9134
224.0352 238.5278 243.8780 248.1835  

Table 7.11: Service Type Probabilities

Equipment HDM Welder


Shovel 67% 33%
Truck 82% 18%
Scraper 74% 26%
Grader 83% 17%
Loader 73% 27%

The objective of this phase is to obtain the number of servers for HDM and
welder resources that produce satisfactory wait times for equipment requiring
service, i.e., less than one hour.

The last two phases are embellishments to the base model. The first
of these, i.e., embellishment one, utilizes the optimal number of resource
servers obtained from the base model to determine the hours spent in service
annually by each piece of equipment. Total service times are tracked for the
entire fleet of each equipment type. Also, averages for annual service times
are found for a single unit of each equipment type. This phase also seeks
to report the number of times each equipment type required a specific type
of service and the wait times associated with these. The last embellishment,
i.e., embellishment two, investigates the benefits of implementing a planned
policy for servicing equipment. This policy involves prioritizing the service of
the equipment type that has the highest need for service annually. The merits
of implementing this policy are evaluated based on improvements obtained
in waiting times for equipment to receive service. Each of these phases is
discussed in detail in the following sections.

7.8.5 Input Modelling


In order to perform input modelling, the data summarized in Tables 7.6,
7.7, 7.8, 7.9, and 7.10 is imported into the Simphony simulation software.
Simphony's input modelling services are then used to fit appropriate
statistical distributions. Given that the input modelling process is performed
within the same environment as that in which the simulation model will be
developed and executed, there is no concern about selecting a statistical
distribution that is not supported by the environment in which the final model
is to be built and subsequently run.

Simphony supports two file formats for importing data to be used in input
modelling: plain text files (such as those created with Notepad) and Comma
Separated Value (CSV) files. In both cases, all the data needs to be assembled
within one column. The data in Tables 7.6, 7.7, 7.8, 7.9, and 7.10 was imported
as plain text files. There are two important issues that need to be considered
when performing input modelling. The first relates to the selection of an
appropriate method for estimating the parameters of the statistical distributions
to be fit to the data. The other relates to the selection of a goodness of fit test
that will guide the choice of the best statistical distribution from the fitted
options. In this example, the distribution fitting method used is least
squares, while the Kolmogorov-Smirnov (K-S) test is used for testing the
appropriateness of the fit.
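For illustration, the first few lines of such a one-column import file for the truck data (values taken from Table 7.7; the file name and the number of lines shown here are arbitrary) would simply be:

151.3896
177.7637
157.0715
150.8631
...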
The statistical distributions chosen for modelling the interval between
equipment failure events based on these two criteria are summarized in
Table 7.12.

It is important to note that this selection was restricted to statistical
distributions that are bounded on the left and non-negative, because
random deviates sampled to model durations in simulation cannot be
negative.
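As a rough sanity check, assuming the common Pareto parameterization with shape α and scale (minimum) x_m, for which the mean is αx_m/(α − 1) when α > 1, the fitted shovel distribution in Table 7.12 implies a mean time between failures of approximately:

\frac{\alpha x_m}{\alpha - 1} = \frac{28.151 \times 184.499}{28.151 - 1} \approx 191.3 \text{ hours},

which is consistent with the observations listed in Table 7.6.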
Visual inspection of the input modelling results is another common way of
assessing the goodness of fit of theoretical distributions. This inspection can
be done by assessing the degree of dispersion of the theoretical distribution
from the empirical distribution on a Probability Density Function (PDF) or
a Cumulative Distribution Function (CDF) plot. In this example, visual
inspection is performed on the CDF. Figure 7.19 shows an overlay of the
theoretical and empirical distributions for the time between truck failure events;
Beta is the theoretical distribution chosen in this case.

These distributions are defined as inputs to the duration property of
the appropriate Task modelling elements so that Simphony is able to use
them to schedule the failure events, hence emulating real-life failures for the
equipment.

7.8.6 Base Model


The simulation modelling approach adopted represents each equipment unit
as a simulation entity. This means that there are entities that represent each
equipment type, namely, shovel entities, truck entities, scraper entities, grader
entities and loader entities.
Table 7.12: Fitted Distributions

Equipment Distribution Type Distribution Parameters


Shovel Pareto(Shape, Scale) Pareto(28.151, 184.499)
Truck Beta(A, B, Low, High) Beta(0.538, 1.894, 150.129, 210.027)
Scraper Pareto(Shape, Scale) Pareto(18.356, 189.389)
Grader Pareto(Shape, Scale) Pareto(7.353, 177.205)
Loader Triang(Low, Mode, High) Triang(184.291, 250.103, 263.507)

[Figure: overlay of the empirical and theoretical (Beta) CDFs for the time between truck failures; x-axis: observed quantile (150 to 200 hours), y-axis: cumulative probability.]

Figure 7.19: Theoretical vs. Empirical CDF for Truck Breakdowns

The crews responsible for performing maintenance or repair services on this
equipment are modelled as Resource modelling elements, each with a file
associated with it to emulate the queuing of equipment that requires servicing.
The development of a simulation model for the problem is based on two
assumptions:

• The mine operates 24 hours a day, 7 days a week, and 365 days a year.

• Service requirements related to repair or maintenance for each piece of
  equipment occur independently.

In order to fulfill the first modelling assumption, the simulation model
is set up in such a way that there are no breaks. For consistency, the modelling
units used are hours. Consequently, each simulation run is set up to execute
for 8,760 hours, emulating the total number of hours in a calendar year. The
second assumption is fulfilled by setting up the simulation model in such a
way that it has separate sub-models that emulate maintenance requirements
and repairs independently. This can be seen in Figure 7.20.
Create modelling elements are configured to release entities at the start
of simulation. Each equipment type has a unique Create element designated
to it. Equipment unit entities created are routed into a Counter modelling
element that registers the number of unit entities that flowed through it.
Thereafter, entities are routed into a Generate modelling element that creates
one copy of each entity transferred into it through a cloning process.

Figure 7.20: Base Model for Mining Equipment Maintenance and Repair

The original equipment unit entities that arrived at the Generate element
are routed out through the top output port, then into a Counter modelling
element, and finally into a Composite modelling element that contains a
model layout that emulates the operation and maintenance of these entities.
Clones of the original equipment unit entities are routed out of the Generate
modelling element through its bottom output port. These then flow through
a Counter modelling element, and finally into a Composite modelling element
that emulates the operation and repair of the equipment that the entities
represent. The Counter elements are used at these strategic locations to verify
that the right number of entities got to that part of the simulation model.

The model layouts within all Composite elements that emulate equipment
operation and maintenance are the same. Also, model layouts within
Composite elements that emulate equipment operation and repair are the
same. Entities routed into them keep flowing in a cyclic fashion, triggering
the scheduling and processing of simulation events until the simulation is
terminated. It is important to state that all equipment (i.e., entities) is serviced
by resources (i.e., welders and HDMs) on a First-In-First-Out (FIFO) basis
in this version of the model.

Operation and Maintenance of Trucks


The model layout within the Composite element that emulates the operation and
maintenance of trucks is chosen for purposes of explaining the logic used
to simulate the operation and maintenance of all equipment types. Figure 7.21
summarizes the model layout encapsulated within the truck maintenance
Composite element.

Figure 7.21: Truck Maintenance Model Layout

This layout is similar to that used within the Composite element that emulates
the operation and repair of trucks. The only difference is in the durations
specified for the tasks that model the maintenance activity and the repair
activity, i.e., 5 hours and 12 hours, respectively. Also, the time to the next
maintenance service (200 hours) differs from the time to the next repair
(specified as a statistical distribution). Apart from these variations, other
aspects of the truck maintenance model layout are the same as those in the
truck repair model layout. The model layouts used to mimic the operation,
maintenance, and repair of trucks are identical to the model layouts used to
model the same for other types of equipment. The only difference is in the
probability values specified in the ProbabilisticBranch element for deciding the
type of maintenance or repair experienced, i.e., HDM or welder. Therefore,
there is no need to discuss the model layouts within the other Composite
elements that emulate the operation, maintenance and repair of other types of
equipment once a discussion is presented on the truck maintenance model
layout. Details of the truck operation and maintenance model layout are
presented next.
The origin and journey of truck entities that are routed into this Composite
element have already been described in the section that presented an
overview of the base model. Truck entities routed into the truck operation
and maintenance Composite element start their journey at an input port
labelled Start Maintenance Cycle. Initially, all the entities arrive at this port
at the same time, i.e., at time zero. These truck entities then flow through a
Counter element labelled Count Trucks that confirms the number of truck
entities that entered this Composite element.

The truck entities are then transferred into a Task element labelled To
Next Maintenance, where each is delayed for 200 hours. This 200-hour
delay represents the scheduling of a maintenance service requirement 200 hours
into the future for each truck entity. When this time arrives, the truck entities
are transferred out of the Task element and into the Execute element labelled
Time Stamp Truck. It is necessary to track the amount of time that truck
entities spend in service for later use. To do this, every truck entity has to be
time stamped as soon as it requires service so that the time that it took to
service it can be determined once the service is completed. Therefore, truck
entities are time stamped, i.e., the current simulation time is stored in the
LX(0) attribute of each entity as soon as it is transferred into the Execute
element, using the following formula:
LX(0) = TimeNow
Return True

Figure 7.22: Model Layout for Truck HDM Maintenance

After truck entities have been time stamped, they are routed out of this
Execute element into a ProbabilisticBranch element labelled Truck Maintenance
Type?. At this element, Simphony will route each truck entity out of the top
output port with an 82% chance, implying that the truck requires an HDM
resource for maintenance. Otherwise, Simphony routes the entity out of the
bottom output port, implying that the truck requires a welder resource for
the maintenance activity.

Truck entities requiring HDM maintenance are routed into a Composite
element labelled Truck HDM Maintenance, while those that require welder
maintenance are routed into a Composite element labelled Truck Welder
Maintenance. The model layouts within these two Composite elements are
identical. The layout for the Composite labelled Truck HDM Maintenance
is presented in Figure 7.22 and discussed below.
Truck entities arriving at the maintenance Composite are routed into the
embedded model via the input port labelled Start HDM Maintenance 1.
Each truck entity then flows through a Counter element labelled HDM
Maintenances 1 and then into a Capture element labelled Truck Captures HDM 1.
When a truck entity is transferred into this Capture element, it requests
one server of the HDM resource (labelled HDM(s)) with a priority of zero.
If there are no servers available, this request is queued in the File element
labelled HDM(s)Q. When the queued request for the truck entity is fulfilled,
the truck entity is routed into the Task modelling element labelled HDM
Maintains Truck, where it is retained for 5 hours, emulating maintenance
work being done on the truck. After the maintenance work is done, the truck
entity is routed into a Release modelling element labelled Truck Releases HDM
1, where it releases the server of the HDM resource that had been granted
to it. The truck entity then flows through a Counter element labelled HDM
Maintenances 2, and then into an output port labelled End HDM Maintenance
1, where it is transferred out of the truck HDM maintenance Composite.
Truck entities leaving the Composite elements labelled Truck HDM Maintenance
and Truck Welder Maintenance are transferred into an Execute
element labelled Compute Service Time, where they trigger the evaluation
of the following formula:

GX(1) = GX(1) + TimeNow - LX(0)
Return True

This formula evaluates the time that the truck entity was not working
due to the maintenance work that it required. It bases this computation on
the current simulation time and the value time stamped in the LX(0)
attribute of the entity. Note that this non-working time includes both the
time that the truck had to wait for the required resource and the time
that the truck was actually being maintained. Every time this computation is
done, the result is used to obtain a new cumulative value of the non-working
time for the trucks since the start of simulation. This cumulative value is
stored in a designated global attribute, i.e., GX(1). GX(1) was designated for
the trucks, while other attributes were designated for the other types of
equipment; see Table 7.15 for these details. A global attribute is used for this
purpose for two reasons:

• To facilitate updating the cumulative non-working time for a type of
  equipment immediately after maintenance and repair.

• To facilitate the retrieval of these values at the end of simulation so
  that they can be used in the generation of the desired charts.

After this computation is completed, the truck entity is routed out of
the Execute element labelled Compute Service Time and back into the Task
element labelled To Next Maintenance. When the entity is routed into this
Task element, it schedules its next maintenance event at a time 200 hours
from the current simulation time. The steps that follow are the same as those
already described; hence, the cyclic loop for truck entity maintenance
repeats itself until the simulation is terminated.

Figure 7.23: Resource Optimization and Termination Model

The cyclic loop that emulates the operation and subsequent repair of
trucks is identical to that just described; hence, there is no need to discuss
it. There are only two differences:

• The time between truck repairs is sampled from the appropriate statistical
  distribution defined in the input modelling section of this exercise.

• The time to repair trucks is 12 hours.

Simulation Execution Sequence


The simulation model developed for this part of the example is unique
because the point at which the simulation terminates is not known beforehand.
However, it follows the typical simulation sequence, i.e., initialization, execution
and termination, but with variations in the way that the initialization,
execution and termination are set up. To a large extent, all initialization
and model termination details are handled by the model layout shown in
Figure 7.23. This model layout is embedded within a Composite element
labelled Resource Optimizer & Terminator that is part of the layout
presented in Figure 7.20.

In the following sections, details are presented on how the initialization is
done, how the search for the optimal number of servers for resources is
performed (i.e., model execution), and how the simulation is terminated.

Parameter Initialization  At the start of each simulation experiment,
there are a number of parameters that need to be initialized so that the
simulation executes and terminates in the right way. In this model, there are
two parameters that require initialization:

• The number of servers for each resource is reset to a value of one so
  that the search for the optimal number can be performed properly.

• The Limit property of the Counter labelled Terminator Flag (Limit) is
  reset to its default value of zero. This property is used as a placeholder
  for the termination flag, and initializing it prevents the simulation from
  being terminated prematurely.

Both initializations are done at the Execute element labelled Update
Resources or Terminate. Both initialization requirements are catered for
by embedding the relevant formula within the Expression property of the
Execute element. The following code snippet represents the formula used to
achieve these initializations:

If Engine.RunIndex = 0 Then

    ' Start of a new experiment: reset both resources to a single server.
    Dim R1 As Simphony.General.Resource = _
        Scenario.GetElement(Of Simphony.General.Resource)("HDM(s)")
    Dim R2 As Simphony.General.Resource = _
        Scenario.GetElement(Of Simphony.General.Resource)("Welder(s)")

    R1.Servers = 1
    R2.Servers = 1

    ' Reset the termination flag held in the Counter's Limit property.
    Dim C1 As Simphony.General.Counter = _
        Scenario.GetElement(Of Simphony.General.Counter)("Terminator Flag (Limit)")
    C1.Limit = 0

End If

This formula is evaluated every time an entity is transferred into the
Execute element. In this model, the Create modelling element labelled Create
Entity is set up to release an entity at the start of each simulation run. This
entity is then transferred into the Execute element, triggering the evaluation
of the formula. A check is performed using an If...Then statement to determine
whether the first run is currently being simulated, i.e., whether it is the start
of a new simulation experiment. If this is the case, all required initialization
is done by the formula.

Server Optimization  After the servers for the HDM(s) and Welder(s)
resources have been initialized, the simulation experiment proceeds and is only
terminated when satisfactory results have been obtained from a simulation
run. The number of servers is incremented between runs in the course of
the search. The flow chart shown in Figure 7.24 summarizes the simulation,
search and termination logic.

The simulation model is set up to execute an additional simulation run
every time the results from the run that was just completed are not
satisfactory. This process continues until satisfactory results are obtained.
The following Visual Basic code snippet is embedded within the Finalize
property of the Execute element labelled Update Resources or Terminate
to achieve this:
Dim HDMWaitingTime As Double = 0.0
Dim WelderWaitingTime As Double = 0.0

' Retrieve the waiting files (queues) associated with each resource.
Dim HDMFile As Simphony.General.File = _
    Scenario.GetElement(Of Simphony.General.File)("HDM(s)Q")
Dim WelderFile As Simphony.General.File = _
    Scenario.GetElement(Of Simphony.General.File)("Welder(s)Q")

' Read the maximum waiting time recorded during the run just completed.
HDMFile.InnerFile.WaitingTime.RunIndex = Engine.RunIndex
HDMWaitingTime = HDMFile.InnerFile.WaitingTime.Maximum

WelderFile.InnerFile.WaitingTime.RunIndex = Engine.RunIndex
WelderWaitingTime = WelderFile.InnerFile.WaitingTime.Maximum

Dim Resource_HDM As Simphony.General.Resource = _
    Scenario.GetElement(Of Simphony.General.Resource)("HDM(s)")
Dim Resource_Welder As Simphony.General.Resource = _
    Scenario.GetElement(Of Simphony.General.Resource)("Welder(s)")

' Trace the run number, current server counts, and maximum waiting times.
TraceLine("Run Count: " & (Engine.RunIndex + 1) & _
    " HDM Servers = " & Resource_HDM.Servers & _
    " Welder Servers = " & Resource_Welder.Servers)

TraceLine("Run Count: " & (Engine.RunIndex + 1) & _
    " HDM Max. Waiting time = " & _
    System.Math.Round(HDMWaitingTime, 2) & _
    " Welder Max. Waiting time = " & _
    System.Math.Round(WelderWaitingTime, 2))

TraceLine("**************************************************" & _
    "**************************************************")

If HDMWaitingTime <= 1.0 And WelderWaitingTime <= 1.0 Then
    ' Both waiting times are acceptable: set the termination flag.
    Dim C1 As Simphony.General.Counter = _
        Scenario.GetElement(Of Simphony.General.Counter)("Terminator Flag (Limit)")
    C1.Limit = 1.0
Else
    ' Otherwise, add a server to whichever resource exceeded one hour.
    If HDMWaitingTime > 1.0 Then
        Resource_HDM.Servers = Resource_HDM.Servers + 1
    End If
    If WelderWaitingTime > 1.0 Then
        Resource_Welder.Servers = _
            Resource_Welder.Servers + 1
    End If
End If

Return True

At the end of each simulation run, the Simphony simulation system
evaluates the formula presented above. In order to determine whether the
results from the simulation run that was just completed are satisfactory,
the maximum wait times are retrieved from the statistics of the HDM(s)Q
and Welder(s)Q waiting files. Results are deemed satisfactory when these
maximum times are less than or equal to one hour. If the results are not
satisfactory, the number of servers for the resource that has an undesirable
waiting time is incremented by one and the next simulation run is started.

Termination of Simulation  When satisfactory results are generated by
a simulation run, a flag is set that is used to terminate the simulation
experiment just as the next simulation run starts. The Limit property of the
Counter labelled Terminator Flag (Limit) is set to a value of one, serving as
an indication that it is time to terminate the simulation. This is done within
the formula embedded in the Finalize property of the Execute element labelled
Update Resources or Terminate, i.e., the formula presented in the server
optimization section.
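The formula that acts on this flag at the start of the next run is not reproduced in this section. A minimal sketch of how such a check might be written, assuming it is evaluated as each run begins (for example, from the Initialize property of the same Execute element) and reusing only element names and calls that appear elsewhere in this example (HaltScenario() appears in Figure 7.24), is:

' Sketch only: end the experiment if the flag was set by the previous run.
Dim C1 As Simphony.General.Counter = _
    Scenario.GetElement(Of Simphony.General.Counter)("Terminator Flag (Limit)")

If C1.Limit = 1.0 Then
    HaltScenario()
End If

Return True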

[Flow chart: Simphony sets the run index to zero; at the start of a new experiment the resource servers are set to one; if the Counter limit equals 1.0, the scenario is halted (HaltScenario(), ending the experiment); otherwise the simulation run is executed and the maximum waiting times of HDM(s)Q and Welder(s)Q are retrieved; if both are at most 1.0 hr, the limit of the counter is set to 1.0, otherwise the number of HDM and/or welder servers is increased by one; the model then moves to the next run index.]

Figure 7.24: Flow Chart for Optimizing Servers During Simulation



Base Model Results


In order to fulfill the criterion that requires the maximum waiting time for
any resource to be under one hour, the maximum waiting time statistic is
tracked for HDMs and welders for each simulation run. These values were
used as a basis for changing the number of servers for each resource. Details
of the results generated from a simulation experiment on both these metrics
are presented next.

Maximum Waiting Times  Values obtained for the maximum waiting
time for equipment in the queues for HDMs and welders are summarized in
Table 7.13.

The results summarized in Table 7.13 are also represented in the form of
charts, presented in Figures 7.25 and 7.26. It is evident from Table 7.13 that,
as the simulation model searches for an optimum solution, there are instances
in which an increase in the number of servers for a given Resource results in a
slightly higher waiting time, contrary to what is expected. This is attributed
to the fact that the simulation model has stochastic aspects embedded within
it, such as the ProbabilisticBranch elements that determine the type of service
required by each equipment instance, and the time between equipment failure
events. These stochastic aspects cause variations in the results obtained from
different simulation runs, hence the unexpected result.

Overall, however, the results show a general decreasing trend in waiting
times for the different resources, which indicates that the simulation model is
moving in the right direction as it searches for the number of servers that
generates the desired result.

Servers for Resources  The simulation model was configured to perform
additional runs until the desired results are obtained. The number of servers
for each resource is the variable changed in the quest for the appropriate
waiting time results. These variations in the number of servers are summarized
in Table 7.14.

The charts plotted in Figures 7.27 and 7.28 are generated from the data
presented in Table 7.14. The chart for HDMs shows that the number of
servers increases steadily until it reaches its optimum value of 27 servers.
On the other hand, the number of servers for the Welder resource increases
stepwise until it gets to its optimum value of 12.

Table 7.13: Results for Maximum Waiting Time for Service

Run Number   Max. Waiting Time, HDM(s) (hr)   Max. Waiting Time, Welder(s) (hr)
1 241.99 51.08
2 91.81 15.00
3 30.00 10.00
4 36.90 1.66
5 25.05 0.00
6 15.00 5.00
7 15.54 1.19
8 13.82 0.00
9 13.15 0.00
10 10.65 0.03
11 10.00 4.43
12 5.00 5.00
13 5.00 2.53
14 5.00 0.00
15 5.00 0.00
16 5.00 0.00
17 5.72 0.00
18 5.00 0.00
19 5.00 0.00
20 5.00 0.00
21 5.00 0.00
22 4.83 5.00
23 5.00 0.00
24 3.52 0.00
25 1.87 0.00
26 3.90 0.00
27 0.00 5.00
28 0.00 0.00

[Figure: maximum waiting time for HDMs (hours, 0 to 200) plotted against simulation run (0 to 25).]

Figure 7.25: Maximum Waiting Time for HDM(s) vs. Simulation Run

[Figure: maximum waiting time for welders (hours, 0 to 50) plotted against simulation run (0 to 25).]

Figure 7.26: Maximum Waiting Time for Welders vs. Simulation Run

Table 7.14: Variation of the Number of Servers across Runs

Run Number   HDM(s) Servers   Welder(s) Servers
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 6 5
7 7 6
8 8 7
9 9 7
10 10 7
11 11 7
12 12 8
13 13 9
14 14 10
15 15 10
16 16 10
17 17 10
18 18 10
19 19 10
20 20 10
21 21 10
22 22 10
23 23 11
24 24 11
25 25 11
26 26 11
27 27 11
28 27 12

[Figure: number of HDM servers (0 to 30) plotted against simulation run (0 to 25).]

Figure 7.27: Number of HDM Servers vs. Simulation Run

[Figure: number of welder servers (0 to 12) plotted against simulation run (0 to 25).]

Figure 7.28: Number of Welding Servers vs. Simulation Run



7.8.7 Embellishment One


The objective of this embellishment is to track useful information about
the reclamation project, assuming that the optimal number of servers for the
maintenance and repair crews obtained in the base modelling exercise is used
as input for this embellishment. The information includes:

• The number of scheduled and processed maintenance and repair instances
  for the equipment.

• Statistics for the time that equipment spends in service annually.

• Statistics for the times that equipment has to wait before service on it
  commences.

A layout of the embellished simulation model is shown in Figure 7.29. The
first change made to the base simulation model that is reflected in Figure 7.29
is the renaming of the Composite element labelled Resource Optimizer &
Terminator to Collect Statistics, in order to reflect what it does in this
version of the model.
Also, changes are made to the model layout within the Composite element
(see Figure 7.30). First, the Create, Execute and Counter modelling
elements used in the layout within this Composite element are renamed to
Create, Collect Service Hours, and Counter, respectively. Then, the Visual
Basic code snippets within the formula editors for the Initialize and
Expression properties of the Execute element are removed. The Visual Basic
code snippet within the Initialize property was meant to reset the flag that
was used to terminate simulation of the base model to its default value at the
start of every simulation experiment. This flag is no longer necessary because
the simulation now terminates after a pre-set number of runs have been executed.
On the other hand, the Visual Basic code snippet that was embedded within the
Expression property of the Execute element was for resetting the number
of servers of all Resource elements back to a value of one at the start of a
new simulation experiment, among other things. This is no longer required,
and therefore, all the code in the Expression property is deleted.
Modifications are made to the Visual Basic code within the Finalize
property to reflect the embellished behaviour of the Execute element, i.e., the
one now labelled Collect Service Hours. The lines that were responsible for
retrieving the File and Resource elements and adjusting the resource servers
based on waiting times are removed. The only code left in place is for tracking
the service hours for equipment and the wait times that equipment experiences
before it gets serviced.

Figure 7.29: Modified Model (Embellishment One)

At the top level, the layout of the base model is embellished by the addition
of Composite elements that encapsulate the Statistics modelling elements
required to track service hours and wait times. The layout of the Statistics
modelling elements encapsulated within the Composite element that tracks
service hours for equipment is shown in Figure 7.31.
Global and local attributes within the Simphony simulation system are
set up in such a way that all the values they hold from a previous simulation
run are cleared out at the start of a new run and reset to their default value.
In this simulation exercise, the use of global attributes for the storage of
non-working times for equipment is the obvious choice. However, global
attributes alone are not sufficient because Simphony resets attribute values
between simulation runs. Observations stored within statistics in Simphony,
on the other hand, persist between simulation runs. Statistic modelling
elements are included in the embellished model for this reason.

During each simulation run, the service hours for equipment are accumulated
and stored in their designated global attributes. Then, at the end of each run,
the value of each of these global attributes is collected into the appropriate
Statistic modelling element. This implies that each Statistic modelling element
will have a number of observations equal to the total number of simulation
runs executed in a given simulation experiment. Another advantage of using
the Statistics modelling element is that it automatically computes all the
statistics of the observations collected by it, which are useful when performing
output analysis.

Details of the attributes designated for tracking the service hours for
equipment in each simulated year are summarized in Table 7.15. For each
type of equipment, a specific global attribute is designated to collect the
cumulative hours spent in repair, in maintenance, and in both repair and
maintenance. This is for the entire fleet of a given equipment type.

Figure 7.30: Service Hours Tracking Model



Figure 7.31: Service Hours Tracking Statistics

Figure 7.32: Waiting Time Tracking Statistics



There is no need to designate attributes to collect the average service
hours for each unit of each equipment type because these numbers can be
derived from the values stored in the global attributes presented in Table 7.15
and the number of units of each type of equipment.

The Visual Basic code within the Finalize property of the Execute modelling
element (i.e., the Execute embedded within the Composite element labelled
Statistics (Service Hours)) is enhanced so that the collection of these
observations is possible. The following code snippet demonstrates how the
service hours for trucks are collected into Statistics modelling elements at the
end of each simulation run. The CollectStatistic formula defined in Simphony
is used to accomplish this. Similar lines of code are used within the same
formula editor to track the service hours for the other types of equipment.

' Collect the cumulative truck service hours for the run just completed.
CollectStatistic("Service Hours (Trucks)", GX(3))
CollectStatistic("Maintenance Hours (Trucks)", GX(4))
CollectStatistic("Repair Hours (Trucks)", GX(5))
A Composite modelling element labelled Statistics (Wait Times) in Figure 7.29
is designated for tracking statistics on the amount of time that equipment
waits before service on it can commence.
Table 7.15: Global Attributes for Tracking Equipment Service Hours

Attribute Designation
GX(0) The total service hours for all shovels in a given year
GX(1) The maintenance hours for all shovels in a given year
GX(2) The repair hours for all shovels in a given year
GX(3) The total service hours for all trucks in a given year
GX(4) The maintenance hours for all trucks in a given year
GX(5) The repair hours for all trucks in a given year
GX(6) The total service hours for all scrapers in a given year
GX(7) The maintenance hours for all scrapers in a given year
GX(8) The repair hours for all scrapers in a given year
GX(9) The total service hours for all graders in a given year
GX(10) The maintenance hours for all graders in a given year
GX(11) The repair hours for all graders in a given year
GX(12) The total service hours for all loaders in a given year
GX(13) The maintenance hours for all loaders in a given year
GX(14) The repair hours for all loaders in a given year

This Composite element encapsulates Statistics modelling elements; the layout
is summarized in Figure 7.32.
The collection of observations is executed by formulas embedded within
the Incoming Trace and Outgoing Trace properties of Capture or Preempt
modelling elements. A formula within the Incoming Trace property of a
modelling element in Simphony is evaluated as an entity is being transferred
into the element, while a formula in the Outgoing Trace property is evaluated
as an entity is transferred out of the modelling element. This configuration
works favourably for purposes of tracking the time that an entity spends at
a Capture element waiting for its resource requirement to be fulfilled. As
an entity is transferred into the Capture elements that are encapsulated within
the Composite elements that emulate the maintenance and repair of equipment,
it is time stamped by storing the current simulation time within its LX(1)
attribute. The following Visual Basic code was used to achieve this:

LX(1) = TimeNow
Return Nothing

After the resource requirement of the entity is fulfilled, it is transferred
out of the Capture modelling element. In order to compute the delay for
each entity, the time at which the entity is transferred out of the Capture
element must be compared to the time at which the entity was transferred into
the Capture. This time difference is evaluated and collected into the appropriate
Statistics modelling element. The following formula was inserted into the
Outgoing Trace property of a Capture element to achieve this:

CollectStatistic("Awaiting Maintenance (Trucks)", _
    TimeNow - LX(1))

Other statements similar to this are used to collect the waiting time
observations for other types of equipment.
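As an illustration, the corresponding formula for shovel maintenance might read as follows; the statistic label used here is assumed for illustration and must match the name of the corresponding Statistic element in Figure 7.32:

CollectStatistic("Awaiting Maintenance (Shovels)", _
    TimeNow - LX(1))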

Embellishment One Results


User-written code, Counter modelling elements and Statistics modelling elements
were used to track observations for the maintenance and repair events, the time
spent in service, and the time equipment spends queuing while waiting to be
serviced. Results on the time spent in service are presented as point estimates
and confidence intervals. Details of these results are summarized in the
following sections.

Table 7.16: Number of Service Requirements for Equipment

Equipment Type   Service Type   Maintenance (Scheduled, Processed)   Repairs (Scheduled, Processed)
Shovel HDM 111.940 111.940 114.770 114.530
Welder 56.060 56.060 56.250 56.080
Truck HDM 412.270 412.270 488.310 487.680
Welder 91.730 91.730 105.900 105.780
Scraper HDM 93.650 93.650 90.940 90.890
Welder 32.350 32.350 31.660 31.660
Grader HDM 69.350 69.350 66.340 66.210
Welder 14.650 14.650 13.450 13.440
Loader HDM 152.900 152.900 128.980 128.750
Welder 57.100 57.100 47.750 47.710

Table 7.17: Statistics on Annual Service Hours for Equipment

Equipment Type Metric Mean Standard Deviation


Maintenance Hours 840.000 0.000
Shovel Repair Hours 2,047.647 10.834
Total Service Hours 2,887.647 10.834
Maintenance Hours 2,522.130 4.079
Truck Repair Hours 7,121.862 25.299
Total Service Hours 9,643.991 25.673
Maintenance Hours 630.000 0.000
Scraper Repair Hours 1,470.478 9.683
Total Service Hours 2,100.478 9.683
Maintenance Hours 420.000 0.000
Grader Repair Hours 955.827 17.351
Total Service Hours 1,375.827 17.351
Maintenance Hours 1,050.050 0.497
Loader Repair Hours 2,117.669 12.536
Total Service Hours 3,167.719 12.572

[Bar chart: average number of maintenance instances (HDM vs. welder) per year by equipment type (shovels, trucks, scrapers, graders, loaders).]

Figure 7.33: Average Number of Maintenance Services by Equipment Type

[Bar chart: average number of repair instances (HDM vs. welder) per year by equipment type (shovels, trucks, scrapers, graders, loaders).]

Figure 7.34: Average Number of Repairs by Equipment Type



Service Events  The simulation model is set up to facilitate tracking of the
number of simulation events that are scheduled and of those that are processed,
specifically for the servicing of equipment. The presence of Counter modelling
elements before and after the Task elements that model equipment maintenance
and repair makes this possible. A summary of the results on these metrics is
presented in Table 7.16. These values are obtained from the mean of the last
count property of the appropriate Counter modelling element.

The results presented in Table 7.16 show that, for the most part, almost all
equipment that required maintenance and repair received the required service
in a given year; this can be seen by comparing the scheduled and processed
simulation events related to each. The data on scheduled maintenance and
repairs summarized in Table 7.16 is plotted on charts to facilitate its
interpretation. Figures 7.33 and 7.34 summarize the charts generated.

In general, there appear to be about as many repairs as there are maintenance
requirements for all equipment on this project. The charts also reveal a higher
number of maintenance and repair instances for trucks than for any other
equipment type. Graders experience the lowest number of service requirements
in a year.

Statistics (Service Hours)  When equipment fails or requires service, it
cannot be used to produce work on the project. The embellished simulation
model is set up in such a way that it tracks the total time that equipment
spends in service. Details of these numbers are summarized in Table 7.17.

The results summarized in Table 7.17 indicate that, on average, the fleet of
trucks spends the longest time in service annually compared to the other
equipment types. Graders spend the least time in service in any given year.
These results can be attributed to the fact that trucks have the largest fleet
size, i.e., 12 units, while graders have the smallest fleet size, i.e., 2 units.

Confidence Intervals (Service Hours)  Ranges are computed for the
mean hours of service for the equipment based on the standard formula for
calculating confidence intervals for mean values. A sample calculation is
presented that illustrates how the confidence interval for the total hours of
service for trucks is computed. One hundred simulation runs are executed in
each experiment, so n = 100, and a 5% level of significance (α = 0.05) is
assumed. Next, we note that the standard deviation reported by Simphony
and shown in Table 7.17 is the population standard deviation, and our
calculation requires the sample standard deviation. We can convert from one
to the other as follows:

S = \sqrt{\frac{n}{n-1}\,\sigma^2} = \sqrt{\frac{100}{99} \times 25.673^2} \approx 25.802,

and then proceed to calculate the confidence interval:

\bar{X} \pm t_{(1-\alpha/2),(n-1)} \frac{S}{\sqrt{n}} = 9{,}643.991 \pm t_{0.975,\,99} \times \frac{25.802}{\sqrt{100}}
\approx 9{,}643.991 \pm 1.984 \times \frac{25.802}{10}
\approx 9{,}643.991 \pm 5.119.

The confidence intervals for the service hours for the different types of
equipment are summarized in Table 7.18.
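As a cross-check on this procedure, applying the same calculation to the repair hours for shovels (mean 2,047.647 and reported standard deviation 10.834 in Table 7.17) gives:

S = \sqrt{\frac{100}{99} \times 10.834^2} \approx 10.889, \qquad t_{0.975,\,99} \times \frac{S}{\sqrt{100}} \approx 1.984 \times \frac{10.889}{10} \approx 2.160,

which matches the half-width of 2.160 reported for shovel repair hours in Table 7.18.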

Waiting Times  The delays associated with equipment waiting to be serviced
are tracked using Statistics modelling elements. The results obtained are
summarized in Table 7.19. These results indicate that trucks experience the
longest wait times of any equipment type. This result is consistent with the
earlier finding that trucks spend the longest time in service in any given year.

7.8.8 Embellishment Two


The results from the first embellishment indicate that, overall, trucks require
more service time than all other types of equipment. After reviewing this result,
the company would like to investigate the viability of implementing a policy in
which the servicing of trucks is prioritized over other types of equipment. This
policy also requires that if a certain type of service is required by a truck, all
crews capable of providing that service are preempted and engaged in fixing
that one truck so that it spends the least possible time in service. If a truck
requiring service arrives and the crews that it requires are engaged on another
truck, the truck that just arrived queues until service is completed on the truck
being worked on.

Table 7.18: Confidence Intervals for the Annual Service Hours for Equipment

Equipment Type   Metric                 Confidence Interval

Shovel           Maintenance Hours      840.000 ± 0.000
                 Repair Hours           2,047.647 ± 2.160
                 Total Service Hours    2,887.647 ± 2.160
Truck            Maintenance Hours      2,522.130 ± 0.813
                 Repair Hours           7,121.862 ± 5.045
                 Total Service Hours    9,643.991 ± 5.119
Scraper          Maintenance Hours      630.000 ± 0.000
                 Repair Hours           1,470.478 ± 1.931
                 Total Service Hours    2,100.478 ± 1.931
Grader           Maintenance Hours      420.000 ± 0.000
                 Repair Hours           955.827 ± 3.460
                 Total Service Hours    1,375.827 ± 3.460
Loader           Maintenance Hours      1,050.050 ± 0.099
                 Repair Hours           2,117.669 ± 2.500
                 Total Service Hours    3,167.719 ± 2.507

Table 7.19: Statistics for the Wait Times for Service

Equipment Type   Service Type   Mean (hrs)   Standard Deviation (hrs)   Maximum (hrs)
Shovel Maintenance 0.000 0.000 0.000
Repair 0.002 0.007 0.050
Truck Maintenance 0.004 0.008 0.050
Repair 0.001 0.002 0.012
Scraper Maintenance 0.000 0.000 0.000
Repair 0.002 0.007 0.037
Grader Maintenance 0.000 0.000 0.000
Repair 0.000 0.002 0.018
Loader Maintenance 0.000 0.002 0.024
Repair 0.001 0.004 0.028

Other trucks arriving at a point in time when the queue length is greater than
one are served on a First-In-First-Out (FIFO) basis. It is assumed that
implementing this policy makes more resources available for the service of each
truck; hence, the time needed to perform the service is reduced to a quarter of
the original length. This translates into a maintenance time of 1.25 hours and
a repair time of 3.0 hours for all types of service.

It is reasonable to test this policy and assess the possible benefits within
a virtual simulation environment before implementing it in the field. This
is the objective of this embellishment. In order to achieve this, the simulation
model developed for embellishment one is modified to account for these
specifications.
The model layouts within the Composite elements that emulate the maintenance
and repair of trucks are modified to account for the specifications of the policy.
First, the modelling elements between the Counter elements that come after
the ProbabilisticBranch element (which decides the type of service) and the
Execute element (which updates the non-working time) are encapsulated
within new Composite elements. Additional modelling elements are added to
these to embellish the logic modelled within the new Composite elements. At
the top level, the new model layout within the Composite element that emulates
truck maintenance is shown in Figure 7.35. This model layout is similar to
that within the Composite modelling element that emulates truck repairs.

Implementing these modifications in order to accommodate the specifications
of the described policy can result in logical errors such as deadlock, especially
in instances where the queue of trucks requiring service is not emptied
appropriately. To avoid this deadlock situation, two Resource elements, each
with a single server and File, are added at the topmost level of the simulation
model to emulate a constraint that ensures that only one truck can hold all
the HDMs or all the welders for a given service. Figure 7.36 presents the
layout of these Resource elements and their respective Files.

Composite modelling elements are introduced into the model to encapsulate
the elements that are required for each truck to preempt all HDMs or all
welders. The model layout shown in Figure 7.37 is used to achieve the
preemption of HDMs for truck maintenance. This model layout is similar to
that used within the Composite for truck repair.

The Capture modelling elements used within the maintenance and repair
Composite elements for the trucks are replaced with Preempt modelling
elements.

Figure 7.35: Modified Truck Maintenance Model (Embellishment Two)

Figure 7.36: Resources and Files to Permit Safe Preemption of Service Crews

Figure 7.37: Modified Model for Truck HDM Maintenance Service (Embellishment Two)

requiring service are assigned a higher priority than other equipment types.
The Preempt element is used to ensure that servers of a Resource that are
engaged in the service of other equipment types get assigned to the truck that
requires them, without delay. The Preempt modelling element in Simphony
has a restriction that does not allow more than one server of the targeted
resource to be assigned to a preempting entity. In order to overcome this
restriction, the model layout was configured in such a way that truck entities
keep cycling through a loop until the entity has preempted all the entities
that it requires. This loop is comprised of the Preempt element (labelled
Preempt an HDM in Figure 7.37) and a ConditionalBranch modelling ele-
ment (labelled All HDMs Preempted? in Figure 7.37).
A Visual Basic code snippet is embedded within the formula editor of
its Condition property that checks to make sure all the servers have been
preempted by the truck entity. In order for this check to work well, the
number of servers preempted by a truck entity at any point in time must
be locally stored. The LN(0) attribute of the truck entity is designated
for this purpose. Given that each time a truck entity is transferred into this
ConditionalBranch modelling element, it will have a higher number of servers
preempted than it previously had, a statement is included in the code snippet
that increments the LN(0) attribute by one. The following code is used to
perform this increment and to check the fulfillment of the preemption of all
required servers. In this code, it is assumed that the truck entity requires
HDMs for service.
LN(0) = LN(0) + 1
If LN(0) = ServersAvailable("HDM(s)") + _
        ServersInUse("HDM(s)") Then
    Return True
Else
    Return False
End If

After all the servers for a given resource have been preempted, the truck
entity is routed out through the True output port of the ConditionalBranch
modelling element labelled All HDMs Preempted? and proceeds into a Task
modelling element labelled Maintain Truck that emulates the maintenance
work being done on the truck.
After the truck entity is released from the Task element, it is routed into
modelling elements that release all the resources assigned to the entity. It is
first transferred into a Release element (labelled Release Preempted

HDMs in Figure 7.37) that frees all the resource servers that the entity
preempted. Information about the number of servers to free is retrieved
from the LN(0) attribute of the truck entity, and then LN(0) is reset to its
default value, i.e., zero. The following formula is inserted into the Servers
property of this Release element to achieve this behaviour.
Dim X As Integer = LN(0)
LN(0) = 0
Return X

Thereafter, the truck entity is transferred into another Release modelling
element labelled Release Access (HDMs) that frees the server for the re-
source that restricts the preemption of all servers of the targeted resource.
The truck entity is then routed out of the Composite element and scheduled
for its next work cycle.

Embellishment Two Results


Service Hours It is expected that the hours that equipment spend in
service will vary with the implementation of the policy that prioritizes the
service of trucks over other equipment types. Table 7.20 summarizes average
values for hours spent in service annually, along with their standard deviations.
These values are read at the end of the simulation from the Statistics modelling
elements that were designated to track these values.

Waiting Times The redefinition of the priorities used to service queued
equipment results in changes in the delays experienced before equipment
gets serviced. New values for the wait times of the different equipment are
summarized in Table 7.21. These values are read at the end of the simulation
from Statistics modelling elements designated to track the time that
equipment wait before service.
It is evident that the average and maximum wait times for all equipment
increase relative to their previous values. However, they remain below the
one-hour threshold, with the exception of the waiting time for truck
maintenance, which is just over the one-hour mark.

Viability of the Policy Waiting times are disregarded as a criterion for
evaluating the viability of the policy, given that they still meet the initial
requirements. Consequently, the average time that equipment spends in service in a

Table 7.20: Statistics on Annual Service Hours for Equipment

Equipment Type   Metric                 Mean        Standard Deviation
Shovel           Maintenance Hours      1,080.718    49.746
                 Repair Hours           2,494.965    54.277
                 Total Service Hours    3,575.683    80.130
Truck            Maintenance Hours      1,019.752    46.513
                 Repair Hours           2,132.832    47.935
                 Total Service Hours    3,152.584    69.319
Scraper          Maintenance Hours        824.345    41.109
                 Repair Hours           1,835.724    36.711
                 Total Service Hours    2,660.070    53.391
Grader           Maintenance Hours        562.477    29.977
                 Repair Hours           1,229.233    39.230
                 Total Service Hours    1,791.710    51.595
Loader           Maintenance Hours      1,371.460    63.257
                 Repair Hours           2,703.619    62.021
                 Total Service Hours    4,075.079    94.081

Table 7.21: Statistics for the Wait Times for Service

                                 Waiting Time (hrs)
Equipment Type   Service Type    Mean    Standard Deviation   Maximum
Shovel           Maintenance     0.203   0.044                0.332
                 Repair          0.219   0.040                0.320
Truck            Maintenance     0.795   0.098                1.089
                 Repair          0.453   0.082                0.673
Scraper          Maintenance     0.221   0.057                0.441
                 Repair          0.236   0.057                0.344
Grader           Maintenance     0.231   0.069                0.403
                 Repair          0.260   0.073                0.456
Loader           Maintenance     0.219   0.056                0.467
                 Repair          0.235   0.045                0.354

given year is used as a test. Table 7.22 summarizes average values for this
metric for the embellishment one and embellishment two models.
The values summarized in Table 7.22 indicate that implementing the policy in
which the service of trucks is prioritized over other equipment types results
in a 20.45% reduction in the average time that all equipment spend in service
annually. This means that equipment will spend more time in production,
resulting in better performance on the reclamation project. This overall
reduction arises from a reduction in the annual service hours for trucks.
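This figure can be verified directly from Table 7.22: summing the annual
service hours over all equipment types gives 19,175.67 hours for embellishment
one and 15,255.12 hours for embellishment two, so the reduction is
(19,175.67 - 15,255.12) / 19,175.67 = 3,920.55 / 19,175.67 = 0.2045, i.e., 20.45%.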

Conclusion
Comparison of the results obtained in embellishment one with those from
this embellishment confirms the viability of implementing this policy and its
potential benefits for the project.
This embellishment, i.e., embellishment two, also demonstrated how dis-
crete event simulation can be used to support decision-making processes,
eliminating the need to rely on gut feelings or to experiment with the real
system to determine whether a given policy will yield good results.

Table 7.22: Comparison of Service Hours (Embellishment One vs. Two)

                 Annual Service Hours
Equipment Type   Embellishment One   Embellishment Two
Shovel           2,887.65            3,575.68
Truck            9,643.99            3,152.58
Scraper          2,100.48            2,660.07
Grader           1,375.83            1,791.71
Loader           3,167.72            4,075.08
Appendix A
Simphony.NET User's Guide

A.1 Simphony.NET Overview


A.1.1 Introduction
Simphony.NET represents an evolution in computer simulation and its inte-
gration into the construction industry. It is the result of over ten years of
research in the application of simulation-based planning techniques in the
industry. Simphony.NET consists of a foundation library, as well as special-
ized computer programs that allow for the development of new construction
simulation tools in an efficient manner.
Simphony.NET's promise is that, as a user, you do not need to possess
any simulation background in order to take advantage of the benefits of
simulation. When you build models, you have access to a domain-specific
set of building blocks, called modelling elements. This means that you
create a simulation model using a library of modelling elements with names
that you can relate to. An earth-moving model, for example, is built with
modelling elements such as source locations, road segments, intersections and
excavators. Similarly, a model of an aggregate production plant is built by
selecting from a list of modelling elements that includes jaw crushers, screens
and product piles.
There is a large library of modelling elements that are available with the
base distribution of Simphony.NET. If any of the existing modelling elements
are not flexible enough to meet your modelling needs, or if you need new
modelling elements to be developed for different construction operations,
then you will need to contact a developer to extend your library.


For more information regarding the individual modelling elements that
are available in the library and their functions, consult the appropriate Spe-
cial Purpose Simulation template manual supplied by the developer.

A.1.2 Basic Features


Modular and Hierarchical Modelling
The main model building block in Simphony.NET is the modelling element.
The user builds a simulation model in Simphony.NET by creating instances of
modelling elements that resemble real components of a construction system,
and linking them together in ways similar to those that exist in a real system.
For representation of complex and large construction projects, Simphony
provides a hierarchical modelling feature. A project can be represented by
an abstracted model at the higher level that contains a limited number of
modelling elements and relations. At a lower level, each of these elements
can have its own child model, which represents the sub-system working inside
that element. The number of these hierarchical levels is only limited by the
computer system's resources.

General Purpose vs. Special Purpose Simulation (SPS)


Simphony.NET supports both general purpose modelling constructs (e.g.,
CYCLONE) which can be used to model dierent construction processes, as
well as specialized templates for specic construction methods (e.g., earth-
moving and aggregate production), which are suitable for users with little
simulation background.

Integration of SPS Tools


Simphony.NET allows the extension of specialized SPS tools through the
construction of models based on several templates. For example, this feature
can be used to build a paving project that requires, in some parts, a special
gradation of aggregate provided by an aggregate production plant.

Custom Output Results


Simphony.NET modelling elements can generate custom output results in
the form of tables and graphs.

A.2 Getting Started


A.2.1 Installation
System Requirements
Simphony.NET requires the following for minimal installation:

• Microsoft Windows XP, Windows 7 (x86 or x64), or Windows 8 (x86
  or x64).

• 1.6 GHz or faster processor.

• 1 GB of RAM (1.5 GB if running on a virtual machine).

• 50 MB of available hard disk space.

• DirectX 9-capable video card that runs at 1024×768 or higher display
  resolution.

Installation Procedures
To install Simphony.NET, run the MSI le from the distribution set and
follow the instructions.

Simphony.NET Files and Folders


The installation program will create a folder called Simphony.NET 4.0 at
the location specied during installation.
The Simphony.NET folder will contain the following folders and files:

\Simphony.NET 4.0\Templates\: the template folder, which contains
all of the template files (*.dll).

\Simphony.NET 4.0\Components\: contains files required for tem-
plate development (*.dll).

Simphony models have the file extension *.sim. Simphony can be started
by:

1. Running the executable \Simphony.NET 4.0\Simphony.UI.exe;



2. Start > All Programs > Simphony.NET 4.0 > Modelling Environment;
or

3. Double clicking a *.sim le.

A.2.2 The Main User Interface


The main user interface of Simphony.NET is shown in Figure A.1:

Modelling Surface
The Modelling Surface is the main workspace for building simulation models.
Modelling elements are placed on the Modelling Surface by dragging them
from the Template Palette. The Modelling Surface can accommodate multi-
ple tabs, thus allowing different portions of the model to be quickly accessed.

Figure A.1: The Main Simphony.NET User Interface



The magnification of the Modelling Surface can be controlled by using the
slider bar at the bottom-right corner of the user interface.

Model Explorer
The Model Explorer displays a navigation tree representing the structure of
the current simulation project. The root of the tree is always the model
itself. Under the model, one or more scenarios can be created. Each scenario
contains a slightly different version of the same simulation model; these versions
can be compared after the simulation has been run. For example, two scenarios
could contain the same model, but one is configured for crews working an 8-hour
shift and the other for a 10-hour shift. Underneath the scenarios will be
the hierarchical structure of the model. Double clicking on an entry in the
tree view will bring up the corresponding portion in the Modelling Surface.

Template Palette
The Template Palette displays a list of all elements available in the mod-
elling element library which can be used to construct new projects. These
elements are categorized by the templates to which they belong or folders
within the templates. Users can add special purpose templates by select-
ing the Add Template item under the File menu and looking in the \Sim-
phony.NET 4.0\Templates\ folder.

Property Grid
The Property Grid displays the properties of the scenario or modelling el-
ement selected on the Modelling Surface. Users can specify the name and
input parameters of scenario or modelling elements. Some of the input pa-
rameters can be dened using user dened code either by Visual Basic or
C#. The associated output for each scenario or modelling element is also
displayed as well as the relevant statistics. The detailed property information
of each element can be found in the associated template manual.

Trace/Debug/Error Windows
This window consists of three tabs: the Trace tab displays trace messages
generated by the model during simulation; the Debug tab displays similar

messages intended to assist with debugging a model; and the Errors tab
displays any integrity errors/warnings that may be present in the model.

Menu Bar & Tool Bar


The menu bar contains six menus: File, Edit, Simulation, Results, View, and
Help.

File Menu
Item Description
New (Ctrl+N) Creates a new simulation model.
Open. . . (Ctrl+O) Opens an existing simulation model.
Save (Ctrl+S) Saves the current simulation model.
Save As. . . (F12) Saves the current model under a different file name.
Add Template. . . Adds a template to the Template Palette.
Remove Template Removes the selected template from the Template Palette.
Add Scenario Adds a new scenario to the model.
Remove Scenario Removes the current scenario from the model.
Print Preview. . . Previews printing of the Modelling Surface.
Print. . . (Ctrl+P) Prints the Modelling Surface.
Recent Files Opens a recently edited simulation model.
Exit (Alt+F4) Closes Simphony.

Edit Menu
Item Description
Undo (Ctrl+Z) Undoes the last action.
Redo (Ctrl+Y) Redoes the last action.
Cut (Ctrl+X) Cuts the selected element(s).
Copy (Ctrl+C) Copies the selected element(s).
Paste (Ctrl+V) Pastes copied element(s) onto the Modelling Surface.
Delete (Del) Deletes the selected element(s).
Select All (Ctrl+A) Selects all elements on the Modelling Surface.
Copy Modelling Surface Copies the entire Modelling Surface as an image.

Simulation Menu
Item Description
Run (F5) Executes the simulation model.
Pause Pauses execution of the simulation model.
Halt Terminates execution of the simulation model.
Check (F7) Performs an integrity check of the simulation model.

Results Menu
Item Description
Statistics. . . Displays the statistics report.
Costs. . . Displays the costs report.
Emissions. . . Displays the emissions report.

View Menu
Item Description
Refresh (Ctrl+R) Redraws the Modelling Surface.
Organize Attempts to aesthetically organize the Modelling Surface.
Zoom. . . Opens the Zoom dialog.
Zoom to 100% Zooms the modelling surface to 100%.
Zoom to Selection Zooms the modelling surface to the selected element(s).
Restore Default Layout Restores default layout of user interface elements.
Options. . . Opens the Options dialog.

Help Menu
Item Description
About. . . Opens the About dialog.

A.3 Developing Simulation Models


The following procedures describe the typical steps followed in the develop-
ment and simulation of a Simphony.NET model.

A.3.1 Define a Scenario


Simphony.NET provides an environment that supports multiple scenarios.
Each scenario within a simulation model can be used to compare different
simulation options or break down the project into smaller models. When
multiple scenarios are used, Simphony will execute each scenario sequentially
until all scenarios have been simulated. A given scenario may be executed
multiple times (Monte Carlo simulation) if multiple runs are specied. Sce-
narios can be added to a model by selecting the Add Scenario item from the
File menu. A scenario has the following properties which can be viewed in
the Property Grid when the scenario has been selected in the Model Explorer
window:

Grid
GridSize: A pair of numbers indicating the horizontal/vertical distance be-
tween grid lines.

ShowGrid: A boolean value indicating whether or not the grid should be


displayed on the Modelling Surface.

ShowRulers: A boolean value indicating whether or not the rulers should


be displayed on the Modelling Surface.

SnapToGrid: A boolean value indicating whether or not modelling ele-


ments should snap to the grid when they are placed/moved on the
Modelling Surface.

Inputs
(Name): The name of the scenario.

Enabled: A boolean value indicating whether or not the scenario should be


simulated when the model is executed.

MaxTime: The maximum permissible simulation time: once this time is


reached a run will be terminated.

RunCount: The number of times the scenario should be executed.

Seed: The seed value for the pseudo-random number generator. If this is set
to zero, the pseudo-random number generator will be seeded using the
system time, which will result in a different sequence of pseudo-random
numbers each time the scenario is executed. Setting this to a non-zero
value will result in the same sequence being generated each time the
scenario is executed, i.e., the results will be identical each time.

StartDate: The date at which simulation will begin: simulation time zero
will correspond to midnight on this date.

TimeUnit: The time unit one unit of simulation time corresponds to.

Reports

Costs: Provides access to the costs report.

Emissions: Provides access to the emissions report.

Statistics: Provides access to the statistics report.

Statistics

TerminationTime: A numeric statistic that contains termination time of


each run.

A.3.2 Building a Model


In Simphony, a simulation model is a collection of modelling elements con-
nected to each other by relationships. To add a modelling element to a
model, drag an element of the desired type from the Template Palette to
the Modelling Surface as shown in Figure A.2. Modelling elements can be
deleted by selecting the element on the Modelling Surface and then selecting
the Delete item from the Edit menu.

Figure A.2: Adding a Modelling Element



The Property Grid


Once placed on the Modelling Surface, the properties of a modelling element
(or relationship) can be viewed in the Property Grid by selecting it on the
Modelling Surface. In Simphony, the Property Grid groups similar proper-
ties into categories. In addition, some properties provide additional options
via a pop-up window, which can be accessed by clicking on a builder button.
The Property Grid also provides a description pane that provides additional
information about the purpose of a property. All of these features of the
Property Grid are shown in Figure A.3.

Design The Design category contains the name of the modelling element
together with a description.

Inputs This category contains properties that aect the simulation be-
haviour of the modelling element. For example, a modelling element repre-

Figure A.3: A Modelling Element Selected in the Property Grid



senting an activity might have a property named Duration that appears


under the Inputs category.

Layout The Layout category contains properties that specify the location,
size, and colour of a modelling element.

Outputs The Outputs category contains properties that display the results
of a simulation. They differ from statistics (below) in that they only display
a single value for the most recent run.

Statistics The Statistics category contains properties that represent the


statistical results of the simulation. For example, the Counter element shows
statistics for interarrival time and productivity, while the Resource element
displays statistics for utilization.

Connection Points
Most modelling elements in Simphony will have connection points that allow
entities to ow into and out of the element. Connection points at which
entities ow into a modelling element will point towards the element, while
connection points at which entities leave a modelling element will point away
from it. The orientation of a modelling element's connection points can
be changed by right-clicking on the element and selecting either the Rotate

Figure A.4: Changing the Orientation of Connection Points



Ports 180◦, the Rotate Ports 90◦ Clockwise, or the Rotate Ports 90◦ Counter-
Clockwise item from the context menu as shown in Figure A.4.

Relationships
Relationships define the direction that entities flow through a model. Rela-
tionships can be created between modelling elements by dragging the output
point of one element to the input point of another as shown in Figure A.5.

Figure A.5: Creating a Relationship

Relationships can be deleted by selecting the relationship on the Mod-
elling Surface and then selecting the Delete item from the Edit menu.
Pivot points can be added to relationships, which allow them to bend. You
can do this by right-clicking on the relationship and selecting the Add Pivot
item from the context menu as shown in Figure A.6 below. Pivot points can
be deleted by selecting the Delete Pivot item. Once a pivot point has been
created, it can be moved on the Modelling Surface causing the relationship
to bend.

Figure A.6: Adding a Pivot Point



A.3.3 Executing a Model


To execute a simulation model, select the Run item from the Simulation
menu. During simulation, execution can be paused or stopped entirely by
selecting either the Pause or Halt item from the Simulation menu. After
simulation is complete, Simphony will report the amount of (wall-clock)
time it took to execute the simulation together with the reason that the
most recent simulation run terminated.
Before executing a model, Simphony will perform an integrity check of
the model to ensure that it is in a state in which simulation can proceed. If
there are any errors or warnings they will be reported in the Error Window
as shown in Figure A.7.

Figure A.7: Errors and Warnings in the Errors Window

If a model contains errors, Simphony will not allow it to be simulated; if
it contains just warnings, Simphony will inform you that the model contains
warnings and ask if you wish to continue with simulation. Double-clicking on
an error or warning in the Errors Window will select the offending element
on the Modelling Surface. It is possible to perform an integrity check of a
model without actually attempting to execute it by selecting the Check item
from the Simulation menu.

A.3.4 Examining Results


Once simulation is complete, Simphony provides a number of ways of viewing
the results.

Summary Reports
A high-level view of the simulation results can be accessed from the three
reports available under the Results menu:
• The Statistics report summarizes all of the statistics collected dur-
  ing simulation. It breaks the statistics into five groups: non-intrinsic
  statistics, intrinsic statistics, counters, resources, and waiting files (i.e.,
  queues).

• The Costs report summarizes all cost information that was collected
  during simulation. This report is broken down into the various cost
  categories specified when the costs were collected.

• The Emissions report summarizes all emission information that was
  collected during simulation.

Statistics Report
Date: Sunday, January 25, 2015
Project: Model
Scenario: Scenario1
Run: 1 of 1

Non-Intrinsic Statistics
Element Mean Standard Observation Minimum Maximum
Name Value Deviation Count Value Value
Scenario1 (Termination Time) 60,090.864 0.000 1.000 60,090.864 60,090.864

Counters
Element Final Overall Average First Last
Name Count Productivity Interarrival Arrival Arrival
Chainage 1,227.000 0.020 48.989 30.000 60,090.864

Resources
Element Average Standard Maximum Current Current
Name Utilization Deviation Utilization Utilization Capacity
Crane 42.8 % 49.5 % 100.0 % 0.0 % 1.000
TBM 61.3 % 48.7 % 100.0 % 100.0 % 1.000
Track 100.0 % 0.0 % 100.0 % 100.0 % 1.000

Waiting Files
Element Average Standard Maximum Current Average
Name Length Deviation Length Length Wait Time
CraneQ 0.000 0.000 1.000 0.000 0.000
TrackQ 0.572 0.495 1.000 1.000 27.969
TrainQ 0.000 0.000 1.000 0.000 0.000

Figure A.8: Example Statistics Report



An example of a Statistics report is shown in Figure A.8. The other two
reports are similar.

Trace and Debug Output


The Trace and Debug Windows allow modelling elements to display messages
as simulation proceeds. Messages of interest when examining the results of a
simulation will normally be found in the Trace Window; the Debug Window
is intended to display messages that assist with validation of the model. An
example of trace output is shown below in Figure A.9. Debug output is
similar.

Figure A.9: Example Trace Output

Both the Trace and Debug Windows have a toolbar at the top. The first
toolbar item on both windows is a combo box that allows you to enable or
disable trace (or debug) output. By default, trace output is enabled and
debug output is disabled. When utilizing trace (or debug) output, keep in
mind that it will have a great impact on simulation performance. Running
a sophisticated simulation model will take considerably longer if trace (or
debug) output is enabled.
The next toolbar item on the Trace Window is a combo box that specifies
which trace categories should be displayed. Whenever a trace message is
generated it can (optionally) be associated with a trace category. The combo
box allows you to filter the trace output to show only a particular category
of interest. Note that there is no such combo box on the Debug Window as
debug messages are simply trace messages associated with the special Debug
category.

Finally, on both the Trace and Debug Windows, the toolbar provides
buttons that allow you to save the trace (or debug) output to a text le,
send it to a printer, or copy it to the clipboard.

Output Properties
Many modelling elements have output properties that can be viewed in the
Property Grid under the Outputs category. When reviewing output prop-
erties of scenarios configured for multiple runs, keep in mind that the value
displayed is for the last run that was executed.

Statistical Properties
Many modelling elements have statistical properties that can be viewed in
the Property Grid under the Statistics category. Unlike output properties,
statistical properties can summarize information across multiple runs. In the

Figure A.10: Statistical Properties and the Numeric Statistic Graph



Property Grid, statistical properties can be expanded to show such informa-


tion as the mean, standard deviation, count, minimum, and maximum of the
observations. You can use the Run sub-property of a statistic to specify the
run the information should be displayed for. Alternately, the builder button
can be clicked to open a popup window as shown in Figure A.10.
This popup window will display various charts depending on the type
of statistic. In general, non-intrinsic statistics will provide a histogram and
cumulative distribution chart, while intrinsic statistics will provide a time
chart. As in the Property Grid, you can use the Run sub-property to specify
the run the chart should be generated for.
The popup window also provides buttons on its toolbar that allow you
to modify certain parts of the chart, copy the chart to the clipboard, or save
the chart (or the underlying data) to a le.
Appendix B
Visual Basic Introduction
Visual Basic is a programming language developed by Microsoft, and is based
on the BASIC (Beginner's All-purpose Symbolic Instruction Code) program-
ming language originally developed by Kemeny and Kurtz (1968).
Visual Basic is available as part of Microsoft's Visual Studio development
environment. However, for the purposes of this introduction we will simply
make use of formulas in Simphony's General Template to demonstrate fea-
tures of the language. In what follows, we will make use of the model shown
in Figure B.1. This model consists of three elements: the first is a Create
element configured to create a single entity at simulation time zero; next
comes an Execute element that will run the Visual Basic code that we write
when the entity passes through; finally, the entity is destroyed by a Destroy
element.

Figure B.1: Simple Model for Demonstrating Visual Basic

B.1 Trace Output


The first thing we need to discuss is how to generate some kind of output
from our Visual Basic code so we can verify that it is doing what we think


it is doing. For the purposes of this tutorial we will use Simphony's trace
window for output. The command needed to write to the trace window is
called TraceLine. To illustrate, here is the code for the traditional Hello,
World! program as the Expression formula of our Execute element:
Public Partial Class Formulas
    Public Shared Function Formula(...) As System.Boolean
        TraceLine("Hello, World!")
        Return True
    End Function
End Class

Let's examine this formula line by line. The first and last lines define a
class that will contain not only this formula, but all other formulas used by
a model. These two lines will be present in every formula you write, and
should never be modified. All of your Visual Basic code will be placed inside
this class definition. Next, the second and fifth lines define the function that
represents our formula. As with the class definition, these two lines will be
present in every formula you write, and should never be modified. Unlike the
class definition, however, they will vary between formulas. In particular, the
return type of the formula can change. The return type of the formula above
is System.Boolean, which means a boolean true/false value. Henceforth, we
will omit these four lines from our code listings.
The most important lines for us are the third and fourth. The third
line is the call to the TraceLine command, which takes a single parameter
specifying what should be written to the Trace Window. In this case we
are specifying the text string Hello, World!. The fourth line begins with a
Return statement, which is a special statement in Visual Basic that indicates

Figure B.2: Output from the "Hello, World!" Formula



that what follows is the return value of the formula, and that processing
of the formula is over. In this case we are returning the value True, which
indicates to the Execute element that the entity being processed should be
passed on to subsequent modelling elements. All of the formulas that we
write in this chapter will end with such a Return statement.
When the model is run inside Simphony, the appropriate trace output is
generated, as shown in Figure B.2.

B.2 Comments
All programming languages support comments that allow you to add text
to your code that makes it easier to understand. In Visual Basic, comments
begin with a single quotation mark ('), after which everything
until the end of the line is considered a comment and is ignored by Visual
Basic. Here is the Hello, World! program with a comment:
' Write the phrase "Hello, World!" to trace output.
TraceLine("Hello, World!")
Return True
We will use comments frequently in this chapter to make our examples easier
to understand.

B.3 Variables
In Visual Basic, variables are the tools that allow you to perform calculations.
They correspond in many ways to the cells of a spreadsheet. Every variable
has both a name and a data type. Variable names must begin with a letter,
and may thereafter contain letters, digits, and (infrequently) the underscore
character. The most commonly used data types for variables are shown in
Table B.1.
Before they can be used, variables must be declared using the Dim keyword
(which is short for Dimension). The syntax for the Dim keyword is:
Dim <Name> As <Data Type>
Normally, when a variable is declared it is initialized to a specic value using
the assignment operator (=); if this is not done, the variable will have its
default value as shown in Table B.1. Here are some examples of declaring
variables:

Table B.1: Common Visual Basic Data Types

Data Type   Description                Examples                  Default Value
Boolean     Boolean true/false value   True, False               False
Integer     32-bit integer             -3, 0, 1, 243             0
Double      64-bit floating point      -10.1, 2.5E-10, 3.14159   0
String      Text string                Hello, 10                 Nothing

Table B.2: Visual Basic Operators

Data Type   Operators
Boolean     Not (logical negation), And (logical and), Or (logical
            inclusive or), Xor (logical exclusive or)
Integer     - (negation), + (addition), - (subtraction), * (multiplication),
            / (division), Mod (modulus), ^ (exponent)
Double      - (negation), + (addition), - (subtraction), * (multiplication),
            / (division), Mod (modulus), ^ (exponent)
String      & (concatenation)

Table B.3: Visual Basic Comparison Operators

Data Type   Operators
Boolean     = (equality), <> (inequality)
Integer     = (equality), <> (inequality), < (less than), > (greater than),
            <= (less than or equal to), >= (greater than or equal to)
Double      = (equality), <> (inequality), < (less than), > (greater than),
            <= (less than or equal to), >= (greater than or equal to)
String      = (equality), <> (inequality), < (less than), > (greater than),
            <= (less than or equal to), >= (greater than or equal to)

' Declares an integer variable named N. It will have an
' initial value of 0.
Dim N As Integer

' Declares a floating point variable named X2, and
' initializes it to 1.5.
Dim X2 As Double = 1.5

' Declares a string variable named Phrase, and
' initializes it to the text string "Hello, World!".
Dim Phrase As String = "Hello, World!"

' Declares a string variable named ElementName, and
' initializes it to the name of the modelling element
' containing the formula.
Dim ElementName As String = Element.Name

' Declares a boolean variable named Loaded, and
' initializes it to True if the X2 variable is
' greater than 0 and False otherwise.
Dim Loaded As Boolean = (X2 > 0)

In Visual Basic, variable names are not case-sensitive. This means that
you can refer to the example variable named X2 by either X2 or x2. It is a
good idea, however, to get into the habit of being as consistent as possible
with the case of variable names, as some programming languages (e.g., C#,
Java, and Python) are case-sensitive and would consider X2 and x2 to be
different variables.

B.4 Operators
Variables are manipulated using operators. Table B.2 lists the most com-
mon operators for each data type. Here are some examples of using these
operators:
' BOOLEAN VARIABLES: R will be assigned a value of True
' if and only if P and Q are both False; otherwise it
' will be assigned a value of False.
R = Not (P Or Q)

' INTEGER OR DOUBLE VARIABLES: D will be assigned the
' result of multiplying the negation of the value of A
' by the sum of B and C.
D = -A * (B + C)

' INTEGER VARIABLES: B will be assigned the remainder
' of dividing the value of A by 2, i.e., B will be
' assigned 0 if A is even, and 1 if A is odd.
B = A Mod 2

' DOUBLE VARIABLES: B will be assigned the square root
' of A.
B = A ^ 0.5

' STRING VARIABLES: The string variable T will be
' assigned the phrase "Hello, World!".
S = "Hello"
T = S & ", World!"

The above operators will (normally) evaluate to a value of the same type
as their operands. Visual Basic supports another set of operators that al-
ways evaluate to a boolean value regardless of their operands. These are the
comparison operators, and they are summarized in Table B.3.
For text strings, a string S1 is considered to be less than another string
S2, if S1 precedes S2 when sorted alphabetically. Similarly, S1 is considered
to be greater than S2, if S1 follows S2 when sorted alphabetically.
There are several examples of using comparison operators in the section
on conditional statements below.

B.5 Data Type Conversions


It is often necessary to convert the value of variables of one type to another.
This happens most frequently in the case of converting the value of a variable
to a string so that it can be displayed as text. Visual Basic provides a set
of functions for accomplishing this that are shown in Table B.4. Here are a
couple of examples showing how to use these functions:
' Converts the string literal "10" to the integer 10
' and stores it in the variable N.
Dim N As Integer = CInt("10")

' Converts the value of the variable X to a string,
' appends the result to the string "Truck Capacity: ",
' and then exits the formula with a return value equal
' to the concatenated string.

Table B.4: Visual Basic Conversion Functions

Function            Description
CBool(<Argument>)   Converts the specified argument to a boolean.
CInt(<Argument>)    Converts the specified argument to an integer.
CDbl(<Argument>)    Converts the specified argument to a double.
CStr(<Argument>)    Converts the specified argument to a string.

Return " Truck Capacity : " & CStr ( X )

B.6 Conditional Statements


A conditional statement allows you to branch the flow of execution depending
on whether a certain condition is true or false. The primary conditional
statement in Visual Basic is the If...Then...End If statement. Its syntax is
as follows:
If <Condition> Then
    <Statements>
End If
The <Condition> portion of this statement must be a boolean expression, i.e.,
something with a value of True or False. This could be a boolean variable,
but is more often an expression involving one of the comparison operators
discussed above. Here's an example of using If...Then...End If statements:
' Checks if the value of the variable Size is equal to
' the string literal "Large", and if so exits the
' current formula with a return value of 20.
If Size = "Large" Then
    Return 20
End If

' Checks if the value of the variable Size is equal to
' the string literal "Small", and if so checks if the
' value of the variable X is greater than or equal to
' 10, returning 15 if it is and 12 if it is not.
If Size = "Small" Then
    If X >= 10 Then
        Return 15
    End If
    Return 12

End If

' Exit the formula with a return value of 10 if none of
' the conditions above are satisfied.
Return 10
The If...Then...End If statement can be extended by the introduction
of an Else clause:
If <Condition> Then
    <Statements>
Else
    <Statements>
End If
And can be further extended by the introduction of one or more ElseIf clauses
(you can have as many as you require):
If <Condition> Then
    <Statements>
ElseIf <Condition> Then
    <Statements>
ElseIf <Condition> Then
    <Statements>
    .
    .
    .
ElseIf <Condition> Then
    <Statements>
Else
    <Statements>
End If
Using these additional features, the example above can be rewritten more
elegantly as:
If Size = " Large " Then
Return 20
ElseIf Size = " Small " Then
If X >= 10 Then
Return 15
Else
Return 12
End If
Else
Return 10
End If

B.7 Loops
Loops allow you to repeat the same block of code multiple times. Visual
Basic supplies a number of different types of looping constructs. The sim-
plest of these is the While...End While statement, which causes flow of exe-
cution to loop as long as a certain condition is satisfied. The syntax for the
While...End While statement is:
While <Condition>
    <Statements>
End While
And here's an example of using a While...End While statement:
' Writes the text "Hello, World!" to trace output a
' number of times equal to the value of the variable N.
' When the loop exits, the variable N will have a value
' of 0.
While N > 0
    TraceLine("Hello, World!")
    N = N - 1
End While
Another commonly used looping construct is the For...Next statement,
which allows you to repeat a block of code a specified number of times.
It differs from the While...End While statement in that you must supply a
counter variable. The syntax of the For...Next statement is:
For <Counter> As <Data Type> = <Start> To <Finish>
    <Statements>
Next
The following example is similar to the one above for the While...End While
statement, but illustrates how the counter variable can be used inside the
body of the loop:
' Writes the text "Hello, World!" to trace output a
' number of times equal to the value of the variable N.
' This time, the phrase is prefixed by the iteration
' number. When the loop exits, the value of the variable
' N will not have changed.
For I As Integer = 1 To N
    TraceLine(CStr(I) & " Hello, World!")
Next
Appendix C
Formula Properties and Methods

NOTE: Optional parameters are enclosed in square brackets.

C.1 Engine and Associated Properties


Engine Gets the current simulation engine.

DateNow Gets the current date/time.

TimeNow Gets the current absolute simulation time.

Engine.RunIndex Gets the zero-based index of the current run.
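
As a minimal illustration (a sketch assuming the code is placed in an Execute
element's formula, as in Appendix B), the following snippet writes the current
run index and simulation time to the trace window:

' Report which run is executing and the current simulation time.
TraceLine("Run index: " & CStr(Engine.RunIndex))
TraceLine("Simulation time: " & CStr(TimeNow))
TraceLine("Simulation date/time: " & CStr(DateNow))
Return True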

C.2 Scenario and Associated Properties


Scenario Gets the current scenario.

GN(i) Gets or sets global integer attribute i.

GS(i) Gets or sets global text attribute i.

GX(i) Gets or sets global floating-point attribute i.


C.3 Entity and Associated Properties


Entity Gets the current entity.

LN(i) Gets or sets local integer attribute i.

LS(i) Gets or sets local text attribute i.

LX(i) Gets or sets local floating-point attribute i.
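
For example, the following sketch for an Execute element formula stores a
sampled value on the current entity and accumulates a global total; the use
of these particular attributes to represent a load size is purely illustrative:

' Store a sampled load size on the current entity and add it to a
' global running total.
LX(0) = SampleUniform(10, 15)   ' local floating-point attribute 0
GX(0) = GX(0) + LX(0)           ' global floating-point attribute 0
LS(0) = "Loaded"                ' local text attribute 0
Return True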

C.4 Accessing and Manipulating Elements


AlterServers(resource, servers) Changes the total number of servers
available at the Resource element named resource to servers.

CloseValve(valve) Closes the Valve element named valve.

CollectCost(cost, category, description, uom, quantity, unitCost)


Collects a cost to the Cost element named cost. The cost will be col-
lected under the category category with description description, unit of
measure uom, quantity quantity, and unit cost unitCost.

CollectStatistic(statistic, value) Collects the value value into the Statis-


tic element named statistic.

Count(counter) Gets the current count at the Counter element named


counter.

GetStockValue(stock) Gets the current value of the Stock element named


stock.

OpenValve(valve) Opens the Valve element named valve.

QueueLength(element) Gets the number of entities queued at the element


named element.

ServersAvailable(element) Gets the number of servers currently available


at the Resource or Task element named element.

ServersInUse(element) Gets the number of servers currently in use at the


Resource or Task element named element.

SetStockValue(stock, value) Sets the current value of the Stock element


named stock to value.

TransferTo(element, [inputPoint]) Transfers the current entity to the


input point named inputPoint of the element named element. There
is no need to specify the input point if the element only has one input
point.
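
For instance, a ConditionalBranch element's Condition formula could use
these methods to inspect the state of the model; in the sketch below, the
element names "TruckQ", "HDM(s)", and "QueueLengthStat" are hypothetical:

' Record the current queue length, then route the entity according
' to whether any HDM servers are free.
CollectStatistic("QueueLengthStat", QueueLength("TruckQ"))
If ServersAvailable("HDM(s)") > 0 Then
    Return True
Else
    Return False
End If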

C.5 Distribution Sampling


SampleBeta(shape1, shape2, [low, high]) Samples a beta distribution
with shape parameters shape1 and shape2 and a range of low to high.
If the low and high parameters are omitted, the range is assumed to
be 0 to 1.

SampleExponential(mean) Samples an exponential distribution with a


mean of mean.

SampleGamma(shape, scale) Samples a gamma distribution with a


shape parameter of shape and a scale parameter of scale.

SampleLogNormal(location, shape) Samples a log-normal distribution


with a location parameter of location and a shape parameter of shape.

SampleNormal(mean, standardDeviation) Samples a normal distribu-


tion with a mean of mean and a standard deviation of standardDevia-
tion.

SampleTriangular(low, high, mode) Samples a triangular distribution


with a range of low to high and a mode of mode.

SampleUniform(low, high) Samples a uniform distribution with a range


of low to high.
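
For example, a Task element's Duration property could be defined with a
formula that samples one of these distributions; the parameter values below
are illustrative only:

' Duration in hours: triangular with a low of 2, a high of 6, and a mode of 3.
Return SampleTriangular(2, 6, 3)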

C.6 Mathematics
System.Math.Cos(d) Returns the cosine of an argument, d, specified in
units of radians.

System.Math.E Returns the constant value e.

System.Math.Exp(d) Returns e raised to the power of d.

System.Math.Log(d) Returns the natural (base e ) logarithm of d.

System.Math.Log10(d) Returns the base 10 logarithm of d.

System.Math.PI Returns the constant value π .

System.Math.Sin(d) Returns the sine of an argument, d, specified in units


of radians.

System.Math.Sqrt(d) Returns the square root of d.

System.Math.Tan(d) Returns the tangent of an argument, d, specified in


units of radians.

C.7 Requesting and Releasing Resources


PreemptResource([entity], resource, point, file, priority) Preempts
a server of the Resource element named resource for the entity entity
with a priority of priority. The entity will wait in the File element
named file if the server is unavailable and be transferred to the con-
nection point point when granted the server. If the entity parameter is
omitted, the current entity is assumed.

ReleaseResource([entity], resource, quantity) Releases ownership of


quantity servers of the Resource element named resource for the en-
tity entity. If the entity parameter is omitted, the current entity is
assumed.

RequestResource([entity], resource, quantity, point, file, [priority])
Requests quantity servers of the Resource element named resource for
the entity entity with a priority of priority. The entity will wait in the
File element named file if the servers are unavailable and be transferred
to the connection point point when granted the servers. If the entity
parameter is omitted, the current entity is assumed. If the priority is
omitted, the request is made with a priority of 0.
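
A sketch of requesting and later releasing two servers of a resource from
within formulas is shown below; the element and connection point names
("HDM(s)", "Out", and "ServiceQ") are assumptions made for illustration:

' Request two HDM servers for the current entity. The entity will wait
' in the File element "ServiceQ" and continue from connection point "Out".
RequestResource("HDM(s)", 2, "Out", "ServiceQ", 1)

' Later, when service is complete, release the two servers.
ReleaseResource("HDM(s)", 2)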

C.8 Scheduling Events


ScheduleEvent([entity], point, dateTime) Schedules an event for the
entity entity at the date/time specied by dateTime. When the event is
processed, the entity will be transferred to the connection point point.
If the entity parameter is omitted, the current entity is assumed.
ScheduleEvent([entity], point, interval) Schedules an event for the en-
tity entity at the current simulation time plus interval time units.
When the event is processed, the entity will be transferred to the con-
nection point point. If the entity parameter is omitted, the current
entity is assumed.
ScheduleEvent([entity], point, timeSpan) Schedules an event for the
entity entity at the current simulation time plus the time span timeS-
pan. When the event is processed, the entity will be transferred to the
connection point point. If the entity parameter is omitted, the current
entity is assumed.
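
For example, an Execute element's formula could schedule the current entity
to re-enter the model after a sampled delay; the connection point name "Out"
below is an assumption made for illustration:

' Schedule the current entity to arrive at connection point "Out" after an
' exponentially distributed delay with a mean of 8 time units.
ScheduleEvent("Out", SampleExponential(8))
Return True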

C.9 Terminating Simulation


HaltRun() Terminates the current run.
HaltScenario() Terminates the current scenario.
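
These methods can be combined with the others above; for instance, the
following sketch stops the current run once a Counter element (hypothetically
named "Chainage") reaches a target count:

' Stop the run once 1000 units have been counted.
If Count("Chainage") >= 1000 Then
    HaltRun()
End If
Return True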

C.10 Writing to the Trace Window


Trace(message, [category]) Writes the string message to the trace win-
dow under category category.

Trace(value, [category]) Writes the string representation of the object


value to the trace window under category category.
TraceLine(message, [category]) Writes the string message followed by a
new line character to the trace window under category category.
TraceLine(value, [category]) Writes the string representation of the ob-
ject value followed by a new line character to the trace window under
category category.
References
AbouRizk, S. M., & Hague, S. (2009). An overview of the cosye
environment for construction simulation. In M. D. Rossetti,
R. R. Hill, B. Johansson, A. Dunkin, & R. G. Ingalls (Eds.),
Proceedings of the 2009 winter simulation conference (pp. 26242634).
Piscataway, NJ: IEEE.
AbouRizk, S. M., & Hajjar, D. (1998). A framework for applying
simulation in the construction industry. Canadian Journal of Civil
Engineering , 25 (3), 604617.
AbouRizk, S. M., & Mohamed, Y. (2000). Simphony  an integrated
environment for construction simulation. In J. A. Joines,
R. R. Barton, K. Kang, & P. A. Fishwick (Eds.), Proceedings of the
2000 winter simulation conference (pp. 19071914). Piscataway, NJ:
IEEE.
Adrian, J. J., & Boyer, L. T. (1976). Modeling method productivity.
Journal of the Construction Division , 102 (1), 147168.
Beaumont, M. A., Zhang, W., & Balding, D. J. (2002). Approximate
bayesian computation in population genetics. Genetics , 162 ,
20252035.
Box, G. E. P., & Muller, M. E. (1958). A note on the generation of random
normal deviates. The Annals of Mathematical Statistics , 29 (2),
610611.
Bratley, P., Fox, B. L., & Schrage, L. E. (1983). A guide to simulation.
New York, NY: Springer-Verlag.
By Wikipedians (Eds.). (n.d.). Abstraction. Retrieved from
http://books.google.ca/
books?id=95i1W0ZGrksC&lpg=PP1&pg=PA3#v=onepage&q&f=false
CACI Advanced Simulation Lab. (2014). Simscript (version iii) [Computer
software]. San Diego, CA: CACI Advanced Simulation Lab.


Cheng, R. C. H. (1978). Generating beta variates with nonintegral shape


parameters. Communications of the ACM , 21 (4), 317322.
City of Edmonton. (n.d.). Drawing showing details of the selected tunnel
project [Image].
City of Edmonton. (2011). Sample drawing showing the design of a
tunnelling project [Image].
City of Edmonton. (2012). North lrt to nait map [Image]. Retrieved from
http://www.gov.edmonton.ab.ca/transportation/
22751_New_Track_length-map.pdf
DeBrota, D. J., Dittus, R. S., Roberts, S. D., & Wilson, J. R. (1989).
Visual interactive fitting of bounded Johnson distributions.
Simulation , 52 (5), 199205.
DoD. (2003). Department of defense (dod) modeling and simulation
verification, validation, and accreditation.
Encyclopedia Britannica. (2014a). Computer simulation. Retrieved from
http://www.britannica.com/EBchecked/topic/130683/
computer-simulation
Encyclopedia Britannica. (2014b). System. Retrieved from
http://www.britannica.com/EBchecked/topic/579111/system
Federal Highway Administration. (2009). Technical manual for design and
construction of road tunnels - civil elements. Retrieved from http://
www.fhwa.dot.gov/bridge/tunnel/pubs/nhi09010/index.cfm
Fishman, G. S. (1977). Principles of discrete event simulation. New York,
NY: Wiley.
Hahn, G. J., & Shapiro, S. S. (1967). Statistical models in engineering.
New York, NY: John Wiley Sons.
Hairer, E., Nørsett, S., & Wanner, G. (1993). Solving ordinary differential
equations I: Nonstiff problems. Berlin: Springer-Verlag.
Halpin, D. W. (1973). An investigation of the use of simulation networks
for modeling construction operations (Unpublished doctoral
dissertation). University of Illinois, Urbana-Champaign, III.
Halpin, D. W. (1977). CYCLONE: Method for modeling job site
processes. Journal of the Construction Division , 103 (3), 489499.
Halpin, D. W., & Woodhead, R. W. (1980). Construction management.
New York, NY: John Wiley Sons.
IEEE. (2000). Standard for modeling and simulation (m&s) high level
architecture (hla)-federate interface specication 1516.1-2000.
Piscataway, NJ: IEEE.

Johnson, N. L. (1949). Systems of frequency curves generated by methods


of translation. Biometrika , 36 , 149176.
Kemeny, J. G., & Kurtz, T. E. (1968). BASIC: A manual for BASIC, the
elementary algebraic language designed for use with the dartmouth
time sharing system. Hanover, NH: Dartmouth College Computation
Center.
Klingener. (1996). Unknown title. Unknown Journal .
Kreyszig, E. (2011). Advanced engineering mathematics. New York, NY:
John Wiley Sons.
Laplace, P. S. (1812). Théorie analytique des probabilités. Paris, France:
Courcier.
Law, A. M., & Kelton, D. W. (1991). Simulation modeling and analysis.
New York, NY: McGraw-Hill.
Law, A. M., & McComas, M. G. (1986). Pitfalls in the simulation of
manufacturing systems. In Proceedings of the 18th conference on
winter simulation (pp. 539542). Washington, D.C..
Lehmer, D. H. (1951). Mathematical methods in large-scale computing
units. In Proceedings of the second symposium on large scale digital
computing (pp. 141146).
Lewis, P. A. W., Goodman, A. S., & Miller, J. M. (1969). A pseudo-random
number generator for the System/360. IBM Systems Journal , 8 ,
136-146.
Macal, C. M., & North, M. J. (2009). Agent-based modeling and
simulation. In M. D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin, &
R. G. Ingalls (Eds.), Proceedings of the 2009 winter simulation
conference (pp. 8698). Piscataway, NJ: IEEE.
Macal, C. M., & North, M. J. (2011). Introductory tutorial: Agent-based
modeling and simulation. In S. Jain, R. R. Creasy, J. Himmelspach,
K. P. White, & M. Fu (Eds.), Proceedings of the 2011 winter
simulation conference (pp. 14561469). Piscataway, NJ: IEEE.
Mann, H. B., & Wald, A. (1942). On the choice of the number of class
intervals in the application of the chi square test. The Annals of
Mathematical Statistics , 13 (3), 306317.
Martinez, J. C. (1996). Stroboscope  state and resource based simulation
of construction processes (Unpublished doctoral dissertation).
University of Michigan, Ann Arbor, MI.
Nelder, J. A., & Mead, R. (1965). A simplex method for function
minimization. The Computer Journal , 7 (4), 308-313.

Newmann. (2010). Unknown title. Unknown Journal .


Palisade. (2015). @risk [Computer software]. Ithaca, NY: Author.
Pearson, E. S., D'Agostino, R. B., & Bowman, K. O. (1977). Test for
departure from normality: Comparison of powers. Biometrica , 64 (2),
231246.
Pritsker, A. A. B., O’Reilly, J. J., & Laval, D. K. (1997). Simulation with
visual slam and awesim. New York, NY: John Wiley Sons.
Qiu, T. (2015). Trac model [Image].
Rabcewicz, L. (1948). Österreichisches Patent Nr. 165573. Austria:
Österreichisches Patentamt.
Rockwell Automation. (2000). Arena [Computer software]. Wexford, PA:
Rockwell Automation.
Royston, J. P. (1982). The w test for normality. Applied Statistics , 31 (2),
176180.
Ruwanpura, J. Y., AbouRizk, S. M., Er, K. C., & Fernando, S. (2001).
Experiences in implementing special purpose simulation tool for
utility tunnel construction operations. Canadian Tunnelling Journal ,
181191.
Sargent, R. G. (2003). Verification and validation of simulation models. In
Proceedings of the winter simulation conference.
Sargent, R. G. (2007). Verification and validation of simulation models. In
Proceedings of the winter simulation conference.
Schmeiser, B. W. (1982). Batch size effects in the analysis of simulation
output. Operations Research , 30 , 556568.
Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for
normality. Biometrica , 52 (3-4), 591611.
SMA Consulting. (n.d.-a). 4d model of lrt tunnel [Image].
SMA Consulting. (n.d.-b). Computer model aided decision making system
for ooding control in shanghai [Image].
SMA Consulting. (n.d.-c). newdev sanitary service project map and
drawing [Image].
Sturges, H. A. (1926). The choice of class intervals. Journal of the
American Statistical Association , 21 (153), 6566.
The AnyLogic Company. (2000). Anylogic [Computer software]. St.
Petersburg, Russia: The AnyLogic Company.
von Neumann, J. (1951). Various techniques used in connection with
random digits. Journal of Research of the National Bureau of
Standards. Applied Mathematics Series , 12 , 3638.

Welch, P. D. (1983). The statistical analysis of simulation results. In


S. S. Lavenberg (Ed.), Computer performance modeling handbook.
New York, NY: Academic Press.
Whitner, R. G., & Balci, O. (1989). Guideline for selecting and using
simulation model verication techniques. In Proceedings of the winter
simulation conference (pp. 559568).
Wilson, J. R. (1984). Statistical aspects of simulation. In Operational
research '84: Proceedings of the 10th annual international conference
on operational research (pp. 921937). Amsterdam.
Wilson, J. R. (1989).
(Unpublished notes)
CONSTRUCTION SIMULATION
An Introduction Using SIMPHONY

Construction Simulation: An Introduction Using
Simphony is an introductory book to process interaction
simulation applied to construction engineering and
management. The book covers the principles of discrete
event simulation, continuous simulation, and combined
simulation concepts using the CYCLONE modelling
template and the General Purpose Modelling template
of Simphony.NET.

This first version of the book covers the basic concepts
required to develop models in CYCLONE and the General
Purpose Modelling templates. Discrete event, continuous,
and combined modelling are discussed with illustrations
from construction processes.

Simphony was first developed in 1998 by AbouRizk
and Hajjar. Its successor, Simphony.NET, was developed
by AbouRizk and Hague and continues to be enhanced
and extended at the University of Alberta through
Dr. AbouRizk’s research program. Simphony is a rich
modelling environment that is composed of simulation
services and a modelling user interface. It is based on
modular and hierarchical concepts that provide a
medium for deploying simulation modelling templates.

ISBN 978-1-55195-357-1
