Simaan M. AbouRizk
Stephen A. Hague
Ronald Ekyalimpa
Hole School of Construction Engineering
Department of Civil and Environmental Engineering
CONSTRUCTION SIMULATION
AN INTRODUCTION USING SIMPHONY
© 2016 by S. AbouRizk
COPYRIGHT NOTICE:
All rights reserved. No part of this book may be reproduced or transmitted in any form or by any
means without written permission from the authors, except in the case of brief quotations
embodied in critical articles or reviews.
ISBN: 978-1-55195-357-1
Preface xi
Acknowledgements xv
Dedication xvii
1 Introduction to Simulation 1
1.1 Construction Engineering: Context . . . . . . . . . . . . . . . 1
1.2 Engineers Work with Models . . . . . . . . . . . . . . . . . . . 3
1.3 Responsibilities of Construction Engineers . . . . . . . . . . . 8
1.4 Simulation Definitions . . . . . . . . . . . . . . . . . . . . . 10
1.5 Types of Simulation . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.1 Dynamic Simulation Models . . . . . . . . . . . . . . . 12
1.5.2 Discrete Event Simulation Models . . . . . . . . . . . . 12
1.5.3 Continuous Change Models . . . . . . . . . . . . . . . 14
1.5.4 Static Simulation Models . . . . . . . . . . . . . . . . . 14
1.5.5 Deterministic Simulation Models . . . . . . . . . . . . 14
1.5.6 Stochastic/Monte Carlo Simulation Models . . . . . . . 17
1.5.7 Other Types of Simulation . . . . . . . . . . . . . . . . 17
1.5.8 4-D Modelling and Animations . . . . . . . . . . . . . 17
1.5.9 Agent-Based Modelling . . . . . . . . . . . . . . . . . . 17
1.5.10 System Dynamics . . . . . . . . . . . . . . . . . . . . . 18
1.6 Modelling Systems . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6.1 Modelling Dynamic Systems . . . . . . . . . . . . . . . 19
1.6.2 A Simple Truck-Shovel Problem . . . . . . . . . . . . . 20
1.7 Simulation Software . . . . . . . . . . . . . . . . . . . . . . . . 22
1.8 Developing Simulation Models . . . . . . . . . . . . . . . . . . 25
1.9 Applications of Simulation in Construction . . . . . . . . . . . 27
2 Review of Statistics 29
2.1 Input Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.1.1 Identifying an Appropriate Distribution . . . . . . . . . 31
2.1.2 Estimating Distribution Parameters . . . . . . . . . . . 34
2.1.3 Testing for Goodness of Fit . . . . . . . . . . . . . . . 34
2.2 Output Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2.1 Developing Point and Interval Estimates . . . . . . . . 36
2.3 Selecting Distributions in Simphony.NET . . . . . . . . . . . . 38
2.4 Example of Input Modelling and Output Analysis . . . . . . . 41
Dedication
To our families, who continue to support our work without reservations…
xvii
Chapter 1
Introduction to Simulation
created and houses can be developed. The development project as a whole is
outside the scope of this example, so our work will be defined solely as
providing sanitary servicing to this new development. We start our work in
the planning phase, where numerous options for providing sanitary servicing
to the NewDev are being developed. Options would include a trunk sewer
connecting the NewDev sanitary system to the main trunk servicing the area,
as shown in Figure 1.2 (the main trunk in this case runs along 170 Street).
This could be a gravity line running along 100 Avenue, or a forced main line
using a collection area and a pump station. The methods of construction vary
from open cut for shallow vertical alignments to trenchless methods for
deeper ones.
The execution phase, which starts after the concept is complete, involves
significant engineering input and is generally divided into three distinct
sub-phases: design, construction, and commissioning. These phases may
overlap. The engineering sub-phase is generally divided into concept design,
preliminary design, and detailed design. The construction phase will have
different sub-phases, and many interrelated contracts and work packages. The
commissioning phase serves to ensure that the facility functions as designed.
In the operation phase, the project owner (municipality or developer)
operates and uses the facility.
Figure 1.2: Sample Project Sanitary Servicing for a New Development (SMA
Consulting, n.d.-c)
the collection point of the sewer system, its end location and its potential
alignments.
In subsequent detailing of the design, the drainage engineer will identify
the network of pipes that will service the area, based on the hydraulic
models and on the manner in which this sewer will be connected into the
main network within the City. The results are described in the form of
drawings, as demonstrated in Figures 1.3 and 1.4, and in the form of
documents describing general contract requirements (common to all similar
projects), special contract requirements (unique to this project), and
specifications to be followed by the contractor when they build the sewer
line.
The drainage engineer uses hydraulic models (as shown in Figure 1.5) to
model the flow of storm water in the envisioned development during the
concept design. This is a numerical model, as it simply computes flow
through the various structures using mathematical equations. The drainage
engineer will project that flow based on rainfall history in the area, the
development itself, the number of houses in the area, and so on. The
hydraulic model represents the network of pipe and its capacity, the
behaviour of the storm water based on historical records, and the built
area where the storms collect, and then numerically simulates the flow of
storm water on a computer. This allows the simulationist (a word used
throughout the book to describe a person developing a simulation model; it
may refer to engineers, managers, analysts, and simulation team members) to
select the right size and grade of the new sewer line by inputting the
required parameters into the model. The engineer will also describe the
assumptions made, the requirements envisioned, and various issues that need
to be addressed, such as land drainage.
Similar to hydraulic engineers, transportation engineers develop and deploy
traffic models to design roads, traffic intersections, and so on. For
example, the storm sewer we intend to build requires that a major shaft be
constructed at a major intersection. Transportation engineers will represent
the roads, the traffic signals leading to that intersection, and the pattern
of traffic in a traffic model, as demonstrated in Figure 1.6. They will then
be able to subject this model to changes, such as lane closures during
construction, and answer questions related to traffic build-up in the area,
for example. These models are process interaction models, involving
combinations of event-driven simulation, mathematical formulations, and
process interaction simulations.
Figure 1.4: Drawing Showing Details of the Selected Tunnel Project (City of
Edmonton, n.d.)
- The product that is being built should match the design and its intent
(which represents the idea envisioned by the owner).
- The design and the execution plan should be free from errors (and any
errors should be identified as early in the design process as possible,
since mistakes tend to be more costly to correct later in the project's
life cycle).
- The construction methods chosen for the project should be feasible and
efficient.
the methods and resources required, when they are involved and how
they combine to complete the work,
[Figure: time series of the storage tank level (tb) over 350 days of
simulation time.]

[Figure: relative-frequency histograms of simulated cost for Work Package A
($10,000 to $25,000), Work Package B ($25,000 to $50,000), Work Package C
($20,000 to $30,000), and the total cost ($65,000 to $95,000).]
not useful for decision making, but can be invaluable for model verication
and debugging.
Figure 1.13: Project Tracking and Control with 4-D Model (SMA Consulting,
n.d.-a)
world system, observe it, collect information about it, then we represent it
(model it).
Illustrated in Figure 1.14 is the simplest form of a queuing system: an
open queue that has customers arriving, being served by servers, and depart-
ing. The box represents the boundaries of the system.
For a period of time T , we measure:
Note that if the queuing system is in a steady state (i.e., the length of
the queue is not varying much), then we must have C ≅ A and λ ≅ µ. From
this it follows that U ≅ µS ≅ λS. Finally, if L is the average number of
customers in the system and W is the average time each customer spends in
the system, then Little's law says that L = λW ≅ µW.
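Little's law can be checked numerically. The sketch below (with made-up arrival and departure times over an assumed observation period T) computes the time-average number of customers L directly and compares it with λW:

```python
# Numerical check of Little's law (L = lambda * W) on a synthetic
# record of customer arrival and departure times (illustrative values).
arrivals   = [0.0, 1.0, 2.5, 4.0, 6.0, 7.5]
departures = [2.0, 3.0, 5.0, 6.5, 8.0, 9.5]
T = 10.0                      # observation period

A = len(arrivals)             # customers that arrived during T
lam = A / T                   # arrival rate (lambda)
W = sum(d - a for a, d in zip(arrivals, departures)) / A  # mean time in system

# L: time-average number in the system, found by integrating the
# head count over [0, T] using the arrival/departure events.
events = sorted([(t, +1) for t in arrivals] + [(t, -1) for t in departures])
area, count, last = 0.0, 0, 0.0
for t, delta in events:
    area += count * (t - last)
    count, last = count + delta, t
area += count * (T - last)
L = area / T

print(L, lam * W)   # the two sides of Little's law agree (within rounding)
```

Because every customer that arrived also departed within T, the law holds exactly here; in general the two sides agree only approximately over a finite window.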
As systems grow more complicated, these problems quickly become tedious to
solve with the method shown above. However, we can simulate them. Let's look
at a simple example of a queuing system and solve it both analytically (as
discussed above) and using simulation.
communicate with one another. This will be covered in detail later in the
book.
A number of different commercial and academic simulation software systems
exist today. Some examples of simulation software developed in academic
institutions include:
- class diagrams,
- state charts,
- activity diagrams.
- risk analysis,
- value analysis,
- estimating.
During/post construction:
[Figure: flowchart of the input modelling process. Start; collect data and
construct a histogram; select a distribution; calculate the parameters of
the selected distribution; check for goodness of fit. If the fit is not
acceptable, return to selecting a distribution; if acceptable, stop.]
w = width of a cell = (max{x_i} − min{x_i}) / (number of cells),

low value of first cell = min{x_i}.

Using this scaling, the area of the scaled histogram will be

∑_{j=1}^{k} w m′_j = w ∑_{j=1}^{k} m_j/(wn) = (w/(wn)) ∑_{j=1}^{k} m_j = (w/(wn)) n = 1,

as desired.
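As a concrete check of this scaling, the following sketch builds a histogram from made-up uniform data (an assumed 10-cell layout) and verifies that the scaled ordinates m′_j = m_j/(wn) give an area of 1:

```python
import random

# Sketch: scale a frequency histogram so its area is 1, matching the
# derivation above. The data and cell count are illustrative choices.
random.seed(1)
data = [random.uniform(0, 100) for _ in range(500)]
k = 10                                    # number of cells
lo, hi = min(data), max(data)
w = (hi - lo) / k                         # cell width

m = [0] * k                               # raw cell counts m_j
for x in data:
    j = min(int((x - lo) / w), k - 1)     # clamp the maximum into the last cell
    m[j] += 1

n = len(data)
m_scaled = [mj / (w * n) for mj in m]     # scaled ordinates m'_j

area = sum(w * mj for mj in m_scaled)     # should be 1, as derived above
print(area)                               # ≈ 1.0
```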
This guideline will usually reveal the general layout of the data. A good
practice for selecting distributions is to identify a family of distributions for
use as an input model. Guidelines for selecting such a family are presented
by Wilson (1989) and can be summarized as follows:
3. It should allow for feasible variate generation (fast, exact, and
accurate).
Also, one should consider any requirements or limitations the simulation
language imposes on the generation of variates.
Law and Kelton (1991), Fishman (1977), and other simulation and statistics
texts.
A visual assessment of the quality of the fit is obtained by comparing plots
of the fitted and empirical CDFs. This is usually done by applying common
sense rather than scientific analysis. Visual assessment, however, proves in
many cases to be as powerful as any other test and is usually applied in
conjunction with the statistical tests. One could also consider the fit of
the PDF to the histogram. This should not be taken as conclusive, however,
since the fit cannot be finally judged unless one looks at the CDF.
Testing for goodness of fit with statistical tests is made easier when those
tests are incorporated into the fitting software. Fitting a distribution to
a data sample is both an art and a science. Using a flexible family of
distributions is encouraged if the simulation software supports variate
generation from such families.
(2) whether the simulation reflects a static, transient, or steady state.
The following discussion of output analysis is specific to the range of
simulation models that can be classified as transient simulations. Wilson
(1984) defines transient simulation as follows: a simulation is transient if
the modelling objective is to estimate parameters of a time-dependent output
distribution over some portion of a finite time horizon for a given set of
initial conditions. Most construction operations are covered by this
definition.
Wilson (1984) categorized the analysis of transient simulation by whether or
not normal distribution theory can be applied. Two types of analysis are
relevant: (1) analysis of output parameters that do not significantly
deviate from normality, and (2) analysis of output parameters that have
non-normal responses. Case 2 has not been frequently encountered in the
simulation of construction processes. An extensive treatment of the analysis
of output data can be found in Welch (1983).
X̄ ± t_{(1−α/2),(n−1)} · S/√n,
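The interval formula above can be computed directly. In this sketch the sample values are made up, and the t critical value t_{0.975, 9} = 2.262 for n = 10 is taken from standard t-tables rather than computed:

```python
import math
import statistics

# Sketch: 95% confidence interval for the mean of simulated outputs.
# Sample values are illustrative; t_crit is from a t-table (n = 10).
sample = [42.1, 39.8, 45.0, 41.3, 38.7, 44.2, 40.5, 43.6, 39.9, 42.8]
n = len(sample)
xbar = statistics.mean(sample)          # point estimate X-bar
s = statistics.stdev(sample)            # sample standard deviation S
t_crit = 2.262                          # t_{1-alpha/2, n-1} for alpha = 0.05

half_width = t_crit * s / math.sqrt(n)
print(xbar - half_width, xbar + half_width)
```

A wider interval signals that more simulation runs are needed to pin down the mean; the half-width shrinks roughly as 1/√n.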
Estimating Probabilities
The probability of completing a job on time is also very valuable in a
number of situations. A classic example would be the simulation of
scheduling networks (e.g., PERT-type) in an attempt to determine the
probability of meeting a target date.
The cumulative distribution function F_X of an output parameter X tells us
the probability that X does not exceed a particular fixed value x:

F_X(x) = Pr{X ≤ x},   x ∈ ℝ.
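In practice, F_X is estimated from the simulation output itself via the empirical CDF: the fraction of observed values not exceeding x. A minimal sketch, with made-up project durations and an assumed target of 16 time units:

```python
# Sketch: estimate Pr{X <= x} from simulated output values using the
# empirical CDF. The durations and target below are illustrative.
durations = [12.0, 15.5, 9.8, 20.1, 14.3, 17.9, 11.2, 22.4, 13.7, 16.0]

def ecdf(sample, x):
    """Fraction of observations not exceeding x: an estimate of F_X(x)."""
    return sum(1 for v in sample if v <= x) / len(sample)

# Estimated probability of finishing within the target date.
print(ecdf(durations, 16.0))   # 0.7
```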
1. Create a .CSV file in which the data are stored in one column.
5. Click the Fit button, then find and select the .CSV file for importing.
10 40 45 376 79 29 15 20 33
244 15 37 195 34 15 25 170 45
60 5 15 20 10 15 55 350 99
74 60 30 10 30 55 13 60 145
15 510 20 40 20 30 114 120 10
330 59 66 559 815 10 50 15 377
555 19 20 10 41 5 62 25 30
85 30 120 10 65 36 570 30 58
92 143 36 25 72 20 25 567 35
390 93 30 15 242 30 15 20 30
20 118 300 32 29 60 40 169 20
61 75 10 185 60 90 55 116 19
10 36 63 508 30 60 30 60 230
542 112 10 75 50 342 25 15 15
39 1079 100 60 130 75 10 22 25
636 35 45 6 30 160 15 75 53
591 30 898 25 120 25 45 52 30
85 94 94 15 38 20 214 30 535
133 466 25 20 155 15 21 60 639
100 106 15 15 40 20 29 10 630
50 62 140 180 105 124 15 27 91
40 89 65 104 449 125 75 30 49
60 20 25 5 153 32 15 19 123
852 32 20 104 11 30 30 22 30
24 30 10 116 20 79 20 60 298
110 10 5 5 8 15 35 20 40
179 45 69 7 567 180 20 20 49
140 45 138 45 20 69 110 20 429
151 478 20 1000 9 10 15 117 10
20 10 709 30 15 43 37 88
- logical errors,
- syntax errors,
- data errors,
using, resulting in syntax errors in the code snippets they are trying to
embed into their models. Examples of this type of error include wrong
declarations, incorrect conversion of types, inappropriately ordering the
values for the parameters of statistical distributions, etc. In most cases,
these types of errors will be trapped by the simulation environment as the
model is run or during development.
There are several ways that simulationists can verify that their models
are working as intended. Examples of these include:
- The use of entity counters. Another way to check for the presence of
logical errors is through the use of counters in the model to track the
flow of entities as simulation events evolve.
- Performing unit tests. Unit tests are a popular way to confirm that a
newly introduced algorithm was implemented correctly in a simulation
environment and performs well. The typical approach is to create a model
that works and whose results are verified. When new pieces are introduced
into the model (e.g., new user-written code, new models, etc.), the unit
test is run. If the results are as expected, then the change did not
introduce new errors.
Various definitions of model validation exist in the literature. For
example, Sargent (2003) defines validation as "the substantiation that a
computerized model within its domain of applicability possesses a
satisfactory range of accuracy consistent with the intended application of
the model."
The following are practical validation approaches (based on Sargent (2003))
that we have found useful in construction engineering and management
simulation modelling applications:
- Historical data validation: If historical data exist (or if data are
collected on a system for building or testing a model), part of the data is
used to build the model and the remaining data are used to determine (test)
whether the model behaves as the system does. We commonly use this approach
in training artificial neural networks.
to assess the fitness of data for use and disregard any data found to be
bad. In addition to disregarding bad data, one can make recommendations on
good collection and archiving procedures.
In the context of simulation model development and validation, data may be
used in one of two ways: operational model validation and conceptual model
development. Data used in the operational validation of simulation models
can be categorized, for convenience, into input data and output data. Data
may also be used to generate the mathematical or logical relationships that
are in turn used in developing the concept and the model.
Prior to utilizing data in the model development and validation processes,
the data should be subjected to a number of tests to confirm their validity.
According to Sargent (2007), these tests may include internal consistency
checks and checks to establish the existence and correctness of outliers.
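Checks of this kind are easy to script. The sketch below illustrates two of them on made-up activity records: an internal-consistency check (every activity must finish after it starts) and a crude 2-sigma outlier screen (the threshold is an illustrative choice, not a prescription from Sargent):

```python
import statistics

# Sketch: simple data-validity checks before using data in a model.
# Records, durations, and the 2-sigma threshold are illustrative.
records = [(0.0, 5.2), (5.2, 9.8), (9.8, 14.1)]          # (start, finish)
consistent = all(finish > start for start, finish in records)

# Duration sample with one suspicious value slipped in.
durations = [5.2, 4.6, 4.3, 5.0, 4.8, 5.5, 4.9, 5.1, 4.7, 5.3, 120.0]
mean = statistics.mean(durations)
sd = statistics.stdev(durations)
outliers = [d for d in durations if abs(d - mean) > 2 * sd]

print(consistent, outliers)   # True [120.0]
```

Flagged values should be investigated, not automatically discarded; an "outlier" may be a data-entry error or a genuine rare event.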
3. the methods and resources required, when they are involved and how
they combine to complete the work,
and directional arrows. Then, we use virtual entities that represent
resources and follow their journeys through the model to describe the
dynamic aspect of the construction process. The simulation is done using a
computer, but can also be done manually for simple models.
In this chapter we outline how to develop simulation models using CYCLONE
and how to simulate them within Simphony. First, we cover the graphical
modelling elements of CYCLONE and their rules. Then, we detail developing
CYCLONE models, hand simulation, and computer simulation. We conclude with
practical applications to construction processes.
- Why don't they have a separate line for those with picky orders that take
forever? My order is straightforward!
- How long is the average customer waiting, and at what point do they decide
it is not worth it and go somewhere else?

You then start planning how they could change the system to make it more
efficient! You are already simulating.
on paper. We'll assume that customers will arrive and be served according to
the data shown in Table 4.1.
At the start of simulation, our model might look like Figure 4.3. The stars
in the "Q" labelled "Customer Pool" represent the 5 customers who will be
arriving at the coffee shop, while the star in the "Q" labelled "Server"
represents the single server working at the shop.
Once simulation begins, the five customers will move from the "Q" labelled
"Customer Pool" to the box labelled "Arrival." Once there, the box will hold
them for the amounts of time specified in the second column of Table 4.1,
i.e., it will hold one of the customers for 3.34 minutes, one for 5.54
minutes, one for 8.05 minutes, and so on. The state of the model at this
point is shown in Figure 4.4.
The model will remain in this state for 3.34 minutes, as nothing else can
happen until the first customer arrives at the coffee shop. Once this amount
of time has elapsed, a customer will exit the box labelled "Arrival" and
enter the "Q" labelled "Customer Queue." The state of the model at this
point is shown in Figure 4.5.
The customer is now waiting to be served, and since the server is currently
idle, this process can begin immediately. Both the server and the customer
move from the "Q" they're currently located in to the box labelled
"Service," as shown in Figure 4.6. From Table 4.1, we see that they will
remain in that box for 3.01 minutes, i.e., they will leave it when the
simulation time reaches 3.34 (the current simulation time) plus 3.01 (the
service duration), so at 6.35 minutes.
Nothing else can happen in the model until the server finishes serving the
first customer at time 6.35. When the simulation reaches that point, the
first customer will leave the box labelled "Service" and enter the "Q"
labelled "Served Customers," while the server will leave the box and return
to the "Q" labelled "Server." The state of the model is shown in Figure 4.8.
Now that the first customer has left the system, we should calculate the
amount of time he/she spent in the coffee shop. Looking back on our
discussion, we see that he/she arrived at time 3.34 and left at time 6.35,
so he/she spent a total of 6.35 − 3.34 = 3.01 minutes in the system.
Having finished with the first customer, the server is now available to
serve the second. The server and the second customer both leave their
respective "Qs" and enter the box labelled "Service." The state of the model
at this point is shown in Figure 4.9.
It takes 2.78 minutes to serve the second customer, so they will be held in
the "Service" box until time 6.35 + 2.78 = 9.13. Before the simulation can
reach this point, however, the third customer is scheduled to arrive (at
time 8.05). When this happens, a customer will move from the box labelled
"Arrival" to the "Q" labelled "Customer Queue," as shown in Figure 4.10. As
with the second customer, this one will be forced to wait as the server is
busy.
Nothing further will happen until the second customer exits the shop at time
9.13. When the simulation reaches this point, the second customer will leave
the box labelled "Service" and enter the "Q" labelled "Served Customers,"
while the server will leave the same box and return to the "Q" labelled
"Server." The state of the model at this point is shown in Figure 4.11.
Now that the second customer has left the system, we should calculate the
amount of time she spent in the coffee shop. Looking back on our discussion,
we see that she arrived at time 5.54 and left at time 9.13, so she spent a
total of 9.13 − 5.54 = 3.59 minutes in the system.
We leave it as an exercise for the reader to continue this simulation until
the fifth and final customer exits the coffee shop at time 26.75. The state
of the system at that point is shown in Figure 4.12.
The results of our simulation are shown in Table 4.2. From these results, we
can see that, on average, each customer spends 4.31 minutes in the coffee
shop, which is under the 5 minutes the owner hopes to achieve.
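The hand simulation above follows a simple single-server FIFO recurrence: each customer departs at max(arrival, previous departure) plus their service time. The sketch below reproduces the first steps; only the first three arrivals are taken from the text, and the third service time is a placeholder assumption, not a value from Table 4.1:

```python
# Sketch of the hand simulation as a single-server FIFO recurrence:
#   depart_i = max(arrive_i, depart_{i-1}) + service_i
# First three arrivals/services only; the third service time (4.00)
# is a placeholder, not a value from Table 4.1.
arrivals = [3.34, 5.54, 8.05]
services = [3.01, 2.78, 4.00]

departures = []
last_departure = 0.0
for a, s in zip(arrivals, services):
    last_departure = max(a, last_departure) + s
    departures.append(last_departure)

time_in_system = [d - a for a, d in zip(arrivals, departures)]
print([round(d, 2) for d in departures])   # [6.35, 9.13, 13.13]
```

The first two departures (6.35 and 9.13) and the second customer's 3.59 minutes in the system match the walkthrough above.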
4.2 CYCLONE
CYCLONE, which stands for CYCLic Operations Network, is a construction
simulation language introduced by Halpin in 1977. Halpin's approach revolves
around the concept that construction operations can be abstracted as cyclic
networks of modelling elements that represent the transition of resources
between two states: an active state and an idle state.
3. The modelling elements process the virtual entities as they arrive and
release them to subsequent elements upon completion.
There are two basic elements in CYCLONE: a Task element and a Queue element.
The Task element represents the active state of a resource and can take two
forms: constrained, requiring a combination of resources before allowing
resources to flow to the next element (called a Combi), or unconstrained,
where resources flow through it unhindered. The Queue element represents the
idle state of a resource; it is where resources that cannot proceed to other
elements wait. There are other elements that regulate the flow of resources
in the model, including the following:
- Production counter,
- Function, and
Models also include entities (abstract elements) and arrows that connect
elements and dictate the direction of flow for entities emanating from an
element. The simplified model in Figure 4.13 demonstrates the modelling
principles of CYCLONE.
of the entities will flow to Queue 2 and one to Task 4. The following
sections detail each of the elements and their functions.
Duration (input): The duration of the task. The time can be constant or
random as required, though the value should never be negative. Be especially
wary of probability distributions that are unbounded below (the normal
distribution, for example). Simphony will issue a warning if you specify
such a distribution.
Priority (input): A Combi element with a higher priority will get preference
in receiving entities from Queue elements over those with a lower priority.
Duration (input): The duration of the task. The time can be constant or
random as required, though the value should never be negative. Be especially
wary of probability distributions that are unbounded below (the normal
distribution, for example). Simphony will issue a warning if you specify
such a distribution.
Count (output): The number of entities that passed through the Counter
during simulation.
Time (output): The simulation time at which the most recent passing entity
was observed. Note that this need not be the time at which simulation
finished, although if the Counter was responsible for terminating the
simulation (i.e., the limit was reached), it will be.
Inputs (input): The number of input points the element should have.
Outputs (input): The number of output points the element should have.
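The earlier warning about distributions that are unbounded below can be illustrated with a simple sampling guard. This is a generic sketch (not Simphony's behaviour): it resamples a normal distribution until the draw is non-negative, with made-up parameters:

```python
import random

# Sketch: the normal distribution is unbounded below, so naive sampling
# can yield negative task durations. One common guard is to resample
# until the draw is non-negative (simple truncation; parameters made up).
def non_negative_normal(mean, stdev, rng=random.Random(42)):
    while True:
        d = rng.gauss(mean, stdev)
        if d >= 0.0:
            return d

durations = [non_negative_normal(5.0, 3.0) for _ in range(1000)]
print(min(durations) >= 0.0)   # True: no negative durations
```

Note that truncation shifts the distribution's mean slightly upward, so heavily truncated normals are usually better replaced by a distribution that is bounded below (e.g., lognormal or triangular).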
Problem Statement
A construction project has been defined as per the scope of work it
involves. The contractor selected to do that work has chosen to dedicate one
excavator to the operation. Details of the operation are as follows:
- a front-end loader picks dirt from this pile and places it onto waiting
trucks,
- ignore the effects of traffic, road profiles (grade), and road roughness.
The contractor has no choice as far as truck capacity is concerned, but
would like to maximize the production of the other equipment by committing
as many trucks to the project as possible. At the same time, the contractor
does not want a situation where some of the trucks committed to the project
are redundant, because they would use up part or all of the anticipated
profit from the project. A schematic layout of a simplified earth moving
operation is shown in Figure 4.14.
Solution
The model input parameters, based on the project scope definition,
prevailing site conditions, and equipment operational attributes (based on
manufacturer's specifications and observations from past similar projects),
are summarized in Table 4.3. The layout of the developed model is presented
in Figure 4.15.
# Parameter Value
1. Initial quantity of dirt to be excavated (cubic yards) 8,900
2. Truck capacity (cubic yards) 8.9
3. Trucks available 5
4. Excavators available at the loading area 1
5. Loaders available at the loading area 1
6. Spotters available at the dumpsite 1
7. Dozers available at the dumpsite 1
8. Excavation duration for 8.9 cubic yards (minutes) 1.2
9. Loading duration for 8.9 cubic yards (minutes) 2.8
10. Haul duration for each truck (minutes) 19.1
11. Return duration for each truck (minutes) 15.6
12. Dumping duration for 8.9 cubic yards (minutes) 2.8
13. Spreading duration for 8.9 cubic yards (minutes) 8.5
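Before simulating, a quick deterministic check of the Table 4.3 inputs is worthwhile. The sketch below computes the number of truck loads and one truck's round-trip time, ignoring all waiting (so it is a best-case figure, not a simulation result):

```python
# Back-of-envelope check of the Table 4.3 inputs before simulating.
dirt_total = 8900.0          # cubic yards to move
truck_cap = 8.9              # cubic yards per truck load
load, haul, dump, ret = 2.8, 19.1, 2.8, 15.6    # durations in minutes

loads = dirt_total / truck_cap       # entities in the "Initial Dirt" queue
cycle = load + haul + dump + ret     # one truck's round trip, ignoring waits

print(round(loads), round(cycle, 1))   # 1000 40.3
```

With 5 trucks on a 40.3-minute cycle, a truck reaches the loader roughly every 8 minutes against a 2.8-minute loading time, which suggests (before accounting for queuing) that the trucks, not the loader, limit production; the simulation confirms or refutes such back-of-envelope reasoning.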
This model layout has a total of 5 cycles: a dirt excavation cycle
(Excavator Cycle), a loading cycle (Loader Cycle), a truck hauling/return
cycle (Truck Cycle), a spotter cycle, and a dozer dirt spreading cycle
(Dozer Cycle). Each of the cycles and the flow units within them are
discussed in detail in the following paragraphs.
In developing the model, the simulationist makes assumptions about what the
virtual entities will represent in the various parts of the model. At the
Queue element labelled "Initial Dirt," an initial number of 1,000 entities
is entered into the Initial property of that Queue element to model the
total volume of 8,900 cubic yards to be excavated. It is assumed that each
entity represents one truck load (8.9 cubic yards) of dirt. At the start of
the simulation (i.e., at time zero), all of these entities will be created.
Entities labelled "Excavators," "Loaders," "Spotters," and "Dozers" are also
created in the Queue elements. These entities represent an excavator, a
loader, a spotter, and a dozer, respectively. The initial quantity specified
in the "Trucks" Queue element is 5, each of the five entities representing a
truck.
Simulation event processing commences with one entity from the "Initial
Dirt" Queue combining with one entity from the "Excavators" Queue within the
Combi labelled "Excavate." Processing of events within other Combi elements
is not possible because their Queue elements do not each have at least one
entity. The combined entity is held within the "Excavate" Combi for 1.2
minutes (the time the excavator takes to excavate 8.9 cubic yards of dirt).
Thereafter, the entity is released and cloned: the original entity is routed
back into the "Excavators" Queue element and the clone is routed into the
"Excavated Dirt" Queue element. The entity routed into the "Excavated Dirt"
Queue element represents 8.9 cubic yards of excavated dirt that has been
placed in a stockpile. The second excavation cycle now starts with the
combination of the excavator entity with another 8.9-cubic-yard dirt entity
in the "Excavate" Combi. This cyclic process continues until the entities in
the "Initial Dirt" Queue run out.
As the second excavation cycle begins, the loading of the first truck also
begins, with an entity from the "Excavated Dirt" Queue combining with an
entity from the "Loaders" Queue and an entity from the "Trucks" Queue within
the "Load" Combi. The combined entity is held within the "Load" Combi for
2.8 minutes (the duration required to load and fill an 8.9-cubic-yard
truck). After this activity, an entity is routed out into the "Haul" Normal
while another entity is cycled back into the "Loaders" Queue to begin
another truck loading cycle, if there are entities present in the "Excavated
Dirt" and
"Trucks" Queues. The entity entering the "Haul" Normal represents a loaded
truck traveling from the loading area to the dumpsite. Entities entering
this element will be held for 19.1 minutes.
Loaded trucks arriving at the dumpsite are routed into a Queue element
labelled "Dump Queue." Loaded truck entities wait here for a spotter to
direct them on where to dump their load. There is 1 spotter at the dumpsite,
represented by an entity initialized in the Queue element labelled
"Spotters." When there is a spotter entity in the Queue labelled "Spotters"
and a loaded truck entity in the Queue labelled "Dump Queue," they get
routed into the Combi labelled "Dump," triggering the start of the dumping
activity. After the dumping activity, an entity representing an empty truck
is routed out into a Normal labelled "Return," while another entity that
represents the 8.9 cubic yards of dumped dirt is routed into a Queue
labelled "Dumped Dirt." Also, an entity representing the spotter, now free,
is routed into the Queue labelled "Spotters." This makes the spotter
available for the next loaded truck arrival or for those that are waiting.
The empty truck entity starts its return journey to the loading area, after
which it is routed into the Queue labelled "Trucks," where it waits to begin
its next cycle.
The entity that represents the 8.9 cubic yards of dumped dirt is combined
with a dozer entity from the "Dozers" Queue within the "Spreading" Combi.
The combined entity is held within this Combi element for 8.5 minutes (the
time required for the dozer to spread 8.9 cubic yards of dirt). Thereafter,
an entity that represents 8.9 cubic yards of spread dirt is released and
routed into
[Figure: flowchart of the hand-simulation algorithm. Start by setting
TNOW = 0. If an activity can begin, generate a (possibly stochastic)
duration ∆ for the activity, calculate the event time TNOW + ∆, and record
the event in the event list. If no activity can begin, record intrinsic
statistical observations; then, if the event list is not empty, transfer
the earliest event on the event list to the chronological list, set TNOW to
the time of the transferred event, and release entities from the completed
activity.

NOTES:
1. A Combi can begin if all preceding queue nodes contain at least one
entity.
2. A Normal can begin if any preceding activity has released an entity.
3. In the case of a tie, the earliest event is considered to be the one
that was scheduled (i.e., …]
To begin, take a sheet of paper and at the top write headings for the
following columns: TNOW, Events, Chronological, Prod, and Util.
Note that the fourth and fifth columns, Prod and Util, are not a part of
the simulation engine; instead, we'll be using them to track the productivity
of our system (a non-intrinsic statistic) and the utilization of the loader (an
intrinsic statistic), respectively.
The first step of the algorithm is to set TNOW = 0, so under the heading
TNOW write the number zero. Your paper should look something like this:
The next step of the algorithm asks whether an activity can begin. The
answer is yes; the Combi labelled Load can begin because entities are
present in both of its preceding queues. Let's assume that truck A goes
first, so the entity representing truck A is moved to the Load activity
together with the entity representing the loader. The duration of the Load
activity is 7 minutes and TNOW is currently 0, so the time at which the
activity will finish is TNOW + ∆ = 0 + 7 = 7. We record this under the
"Events" column; the paper now looks like this:
We now move back to the question of whether an activity can begin. This
time the answer is no; truck B is available to be loaded in the Trucks queue,
but the Loader queue is empty because the loader is currently busy with
truck A. We now need to record statistics. In this case we're only concerned
with the utilization as it is intrinsic. As the loader is currently busy, we
record 100% in the Util column. Now we move to the next question: is the
event list empty? Again, the answer is no; the event we just recorded is in
the list. Next, we need to scan the event list for the earliest event, which is
easy as there is only one event. We copy this event into the Chronological
column and cross it out from the Events column. Finally, we need to set
TNOW to the time of this event, so we cross out the 0 in the TNOW column
and write a 7 underneath. Our sheet of paper now looks like this:
At this point, the task of loading truck A is complete and both the truck
and the loader are released from the Load activity. We now return to the
question: can an activity begin? This time the answer is yes; the Load
activity can begin (because the loader and truck B are available in their
respective queues) and the Travel activity can begin (because truck A was
just released from the Load activity). It does not matter which activity we
choose to deal with first (the algorithm will produce the same results in either
case), so let's pick the Load activity. First, we move the entity representing
truck B and the entity representing the loader to the Load activity, and then
we calculate the event time, which is TNOW + ∆ = 7 + 7 = 14. We record this
event in the event list. Again we ask the question: can an activity begin?
The answer is yes, as we still need to deal with the Travel activity. The entity
representing truck A is now moved to the Travel activity, and the event time
is calculated to be TNOW + ∆ = 7 + 17 = 24. This event is also recorded
under the Event List column. Our sheet of paper now looks like this:
TNOW Events Chronological Prod Util
0 A loaded @ 7 100%
7 B loaded @ 14 A loaded @ 7
A arrives @ 24
Again, we ask the question: can an activity begin? This time the answer
is no; truck A is busy traveling and truck B is being loaded. We therefore
need to record statistics: the loader is still busy (this time with truck B),
so we record 100% in the Util column. Next, the event list isn't empty,
so we need to scan the list for the earliest event, which is the completion of
loading truck B. We cross this event out under the Events column and copy
it to the Chronological column, and then update TNOW to 14. Our sheet
of paper now looks like this:
TNOW Events Chronological Prod Util
0 A loaded @ 7 100%
7 B loaded @ 14 A loaded @ 7 100%
14 A arrives @ 24 B loaded @ 14
The Load activity is now complete and the entities representing truck
B and the loader are released. We return to the question: can an activity
begin? This time the answer is yes; truck B can begin the Travel activity.
We move the entity to the Travel activity and calculate the finish time as
TNOW + ∆ = 14 + 17 = 31. We record this new event under the "Events"
column:
4.3. HAND SIMULATION 89
This time, when we ask if an activity can begin, the answer is no; both
trucks are in the process of travelling to the dump site. We record the
utilization of the loader (it's now idle) and scan the event list. We see that
the arrival of truck A at the dump site is the earliest event, so it is crossed
out from the "Events" column and transferred to the "Chronological" column,
and TNOW is updated to 24:
The Travel activity is now complete and the entity representing truck A
is released. At this point, truck A can begin the Dump activity. The finish
time for this event is TNOW + ∆ = 24 + 3 = 27, and an event is recorded:
There are no other activities that can begin at this time, so we record
the utilization and scan the event list for the earliest event. This turns out
to be the event we just scheduled, so that event is crossed out from the
"Events" column and transferred to the "Chronological" column. TNOW is
then updated to 27 and the entity representing truck A is released. At this
point, the truck A entity passes through the production counter, and we
record the production in the Prod column (1 truckload in 27 minutes).
Once it has passed the production counter, truck A can begin the Return
activity. The finish time is TNOW + ∆ = 27 + 13 = 40, and the event is added
to the Events column. Our sheet of paper now looks like this:
90 CHAPTER 4. MODELLING WITH CYCLONE
No further activities can begin and the event list is not empty, so we record
the utilization (the loader is idle) and scan the event list. The earliest
event is the arrival of truck B at the dump site, so it is crossed out from
the "Events" column, transferred to the chronological list, and TNOW is
updated to 31. Truck B is released from the Travel activity and can begin
the Dump activity, which has a finish time of TNOW + ∆ = 31 + 3 = 34. This
event is added to the event list.
Once again there are no other activities that can begin, and there are
still events to process. The loader continues to be idle, so we record that
under the Util column. The earliest event in the event list is the one we
just scheduled, so it is removed, transferred to the chronological list, and
TNOW is updated to 34. Truck B is now released from the Dump activity
and passes through the production counter. As with truck A, we record
the production in the Prod column (2 truckloads in 34 minutes). After
passing the production counter, truck B can begin the Return activity, which
has a finish time of TNOW + ∆ = 34 + 13 = 47. Once this event is added to
the event list, our paper looks like this:
TNOW Events Chronological Prod Util
0 A loaded @ 7 100%
7 B loaded @ 14 A loaded @ 7 100%
14 A arrives @ 24 B loaded @ 14 0%
24 B arrives @ 31 A arrives @ 24 0%
27 A dumped @ 27 A dumped @ 27 1/27 0%
31 A returns @ 40 B arrives @ 31 0%
34 B dumped @ 34 B dumped @ 34 2/34
B returns @ 47
Again, there are no other activities that can begin and there are still
events to process. The loader continues to be idle, so we record that under
the Util column. The earliest event is the return of truck A to the loading
site, so this event is removed, transferred to the chronological list, and TNOW
is updated to 40. Truck A is released from the Return activity and can now
begin the Load activity (because the loader is available), which has a finish
time of TNOW + ∆ = 40 + 7 = 47. Once this event is added to the event list,
our paper looks like this:
No further activities can begin and the event list still contains events, so
we need to record the utilization (100% this time as the loader is busy with
truck A) and scan the event list for the earliest event. This time the result is
a tie; both trucks are scheduled to complete their activities at time 47. We
need to make use of our tie-breaking procedure, which states that in the case
of a tie, the earliest event is the highest on the list (which will be the event
that was recorded first). Thus, we will process the return of truck B to the
loading site first. This event is removed from the event list, transferred to
the chronological list, and TNOW is updated to 47. Truck B is now released
from the Return activity; however, it cannot begin the Load activity as the
loader is still busy with truck A. Our paper now looks like this:
[Figure: loader utilization (%) versus simulation time (min) for the
47-minute hand simulation]

The time-weighted average utilization of the loader over the run is:

(1/47) ∫₀⁴⁷ f(x) dx = [100% × (14 − 0) + 0% × (40 − 14) + 100% × (47 − 40)] / 47 ≈ 44.7%.
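The time-weighted average above can be verified with a short script. This is a generic sketch (the function name and data layout are our own, not part of Simphony): the breakpoints are (time, value) pairs, and each value holds until the next breakpoint.

```python
def time_weighted_average(breakpoints, end_time):
    """Average a piecewise-constant function given as (time, value) pairs,
    where each value holds until the next breakpoint (or end_time)."""
    total = 0.0
    pairs = breakpoints + [(end_time, None)]
    for (t0, v), (t1, _) in zip(pairs, pairs[1:]):
        total += v * (t1 - t0)
    return total / end_time

# loader utilization from the hand simulation: 100% on [0, 14),
# 0% on [14, 40), and 100% again on [40, 47)
utilization = [(0, 1.0), (14, 0.0), (40, 1.0)]
print(round(time_weighted_average(utilization, 47), 3))  # 0.447
```

The same function works for any intrinsic statistic that changes in steps, which is exactly how statistics accumulate in a discrete event simulation.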
The next step of the algorithm is to ask the question: can an activity
begin? The answer is yes: the skid steer can supply stone for the labourers.
The entity representing the skid steer and the entity representing space
for the stones are moved to the resupply activity and a die roll is made to
determine the duration. Each face of the die corresponds to a duration, as
tabulated below:

Six-Sided Die Roll
Roll  Resupply (min)    Roll  Placement (min)
 6        7.9            6        1.83
 5        7.5            5        1.76
 4        7.1            4        1.71
 3        6.9            3        1.66
 2        6.5            2        1.62
 1        6.1            1        1.58

The roll results in a 1, so the duration of the activity will be 6.10 minutes.
The event time is therefore: TNOW + ∆ = 0.00 + 6.10 = 6.10. We record this
event under the "Events" column. The paper now looks like this:
TNOW Events Chronological Stones
0.00 Resupply @ 6.10
At this point no further activities can begin: the skid steer is busy
bringing the first load of stones, and the labourers are idle as they have no stones
to place. We therefore need to record that the number of stones available
to be placed is 0, and then scan the event list for the earliest event. The
resupply of stones at simulation time 6.10 is the only event, so it is crossed
out, transferred to the chronological list, and TNOW is updated to 6.10. The
paper now looks like this:
The entities are now released from the resupply activity. The skid steer
returns to its queue, and the available space is converted to 20 stones by the
Generate element and all 20 are queued for the labourers. We now return to
the question: can an activity begin? This time there are four activities that
can begin: each of the labourers can begin placing a stone and the skid steer
can begin supply of the next load of stones. To calculate the duration of
the placement activities, the die is rolled three times and the numbers 5, 1,
and 5 result. These correspond to durations of 1.76, 1.58, and 1.76 minutes,
respectively, and end event times of 7.86, 7.68, and 7.86. These three events
are added to the event list. Finally, the die is rolled again to determine the
duration of the resupply activity and the result is 2, so resupply will take 6.5
minutes and complete at simulation time 12.60. This event is also added to
the event list. The paper now looks like this:
At this point, labourer B can begin placing another stone. The die roll
to determine duration is 6, so placement of the stone will take 1.83 minutes
and finish at time 9.51. After this event is added to the event list, the paper
looks like this:
TNOW Events Chronological Stones
0.00 Resupply @ 6.10 0
6.10 Labourer A @ 7.86 Resupply @ 6.10 17
7.68 Labourer B @ 7.68 Labourer B @ 7.68
Labourer C @ 7.86
Resupply @ 12.60
Labourer B @ 9.51
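The event times in this sheet are easy to verify with a lookup table for the die (the dictionary below simply transcribes the placement column of the six-sided die table):

```python
# placement durations (min) keyed by die roll, transcribed from the table
PLACEMENT = {1: 1.58, 2: 1.62, 3: 1.66, 4: 1.71, 5: 1.76, 6: 1.83}

tnow = 6.10                   # resupply just completed
rolls = [5, 1, 5]             # the three labourers' die rolls
ends = [round(tnow + PLACEMENT[r], 2) for r in rolls]
print(ends)                   # [7.86, 7.68, 7.86]
print(round(tnow + 6.5, 2))   # resupply (a roll of 2) completes at 12.6
```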
[Figure: stones available versus simulation time (min) for the hand
simulation]

The time-weighted average number of stones available is:

(1/18.7) ∫₀^18.7 f(x) dx = (1/18.7) × 214.43 ≈ 11.467.
complete ring, which takes the stresses from the ground and redistributes
them to the surrounding ground area. However, there are numerous issues
to address. The time the ground can remain stable upon excavation greatly
depends on the geotechnical conditions in the area. Dry sand, for example,
will not support itself and tends to collapse immediately upon excavation.
Hard rock may not require any shotcrete in the interim, and can be bolted
to provide temporary support. In general terms, tunnels will have a primary
liner (material like shotcrete or steel and lagging) to provide support during
construction. Then a secondary liner is used to provide the ultimate support.
Finally, for unstable ground, excavation and support are typically staged in
small sections to avoid leaving large areas unprotected and prone to failure.
This is called benched construction. The tunnel face shown in Figure 4.26
can be subdivided into sections. The top section in this figure is called
the top heading, while the bottom section is called the bench/invert section.
We start excavating the top heading, advancing 1 or 2 meters, and applying
shotcrete and temporary support at the bottom of the top heading (often
in the form of steel beams called lattice girders). Then, we excavate the
invert section, apply shotcrete to complete the ring, and repeat the process.
Different equipment is used depending on the size of the tunnel and the
ground conditions. For example, a backhoe or a rock grinder can be used,
depending on whether the material is soft or hard. The transportation of
the material is done using loaders and trucks. For smaller utility tunnels,
for example, smaller machines are used and often muck cars and a train are
utilized instead of loaders and trucks due to size restrictions and depth of
the tunnel.
The Federal Highway Administration (2009) provides a good summary
of the NATM method for the interested reader.
1. Complete the top section (heading) in two stages (each one is 1 meter
deep).
Stage 1:
4.4. NORTH LRT CASE STUDY
Stage 2:
4.4.5 Results
The total time that it took to complete the simulation was 1,427.92 hours.
Productivity of the overall system was 0.18 meters per hour, or 1.44 meters
per 8-hour shift. On average, it takes 329.52 minutes to complete one cycle,
which matches the value shown in Table 4.6, confirming that the model
is valid. With regard to resources in the model, while the mining crew
had no waiting time, the other resources waited a substantial amount of
time. The backhoe waited approximately 88% of the time, the truck waited
approximately 91% of the time, the surveyor waited approximately 89% of
the time, and the shotcrete machine waited on average 76% of the time.
The utilization of the resources, with the exception of the mining crew, was
therefore very low. See Figure 4.28 for a full statistics report.
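The headline figures in this results section are internally consistent, which is itself a quick validity check. A sketch of the arithmetic (assuming, as the case study sets up, that each of the 260 counted cycles advances the tunnel by a 1-meter round):

```python
total_hours = 1427.92
cycles = 260                      # final count reported by the Counter element
advance_per_cycle = 1.0           # meters; assumption: each cycle is a 1 m round

minutes_per_cycle = total_hours * 60 / cycles
productivity = cycles * advance_per_cycle / total_hours   # meters per hour
shift = round(productivity, 2) * 8                        # meters per 8-hour shift

print(round(minutes_per_cycle, 2))   # 329.52
print(round(productivity, 2))        # 0.18
print(round(shift, 2))               # 1.44
```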
between runs. Statistics for waiting time are similar to the previous model.
We therefore conclude that the model is still valid and that the assumption
regarding the constant duration is acceptable given the model's performance
(the results change little, as the variances are small).
Statistics Report
Date: Wednesday, January 14, 2015
Project: Model
Scenario: Scenario1
Run: 1 of 1
Non-Intrinsic Statistics
Element Mean Standard Observation Minimum Maximum
Name Value Deviation Count Value Value
Scenario1 (Termination Time) 85,675.200 0.000 1.000 85,675.200 85,675.200
Intrinsic Statistics
Element Mean Standard Minimum Maximum Current
Name Value Deviation Value Value Value
Backhoe (PercentNonEmpty) 0.877 0.328 0.000 1.000 1.000
Mining Crew (PercentNonEmpty) 0.000 0.000 0.000 1.000 0.000
Shotcrete (PercentNonEmpty) 0.761 0.426 0.000 1.000 0.000
Surveyor (PercentNonEmpty) 0.894 0.308 0.000 1.000 1.000
Truck (PercentNonEmpty) 0.913 0.281 0.000 1.000 1.000
Counters
Element Final Overall Average First Last
Name Count Productivity Interarrival Arrival Arrival
Counter 260.000 0.003 329.520 329.520 85,675.200
Waiting Files
Element Average Standard Maximum Current Average
Name Length Deviation Length Length Wait Time
Backhoe 0.877 0.328 1.000 1.000 287.953
Mining Crew 0.000 0.000 1.000 0.000 0.000
Shotcrete 0.761 0.426 1.000 0.000 83.587
Surveyor 0.894 0.308 1.000 1.000 293.698
Truck 0.913 0.281 1.000 1.000 300.003
minutes, a high value of 576 minutes, and a mode of 480 minutes. The
model is shown in Figure 4.29, with the breakdown cycle highlighted in red.
This model took a total of 1,592.25 hours to complete. The backhoe waited
on average 75% of the time, the truck waited on average 92% of the time,
the surveyor waited on average 89% of the time, and the shotcrete machine
waited on average 78% of the time. Neither the crew nor the breakdown
had any waiting time. Results for this model remain similar to those seen
in the previous two models.
Chapter 5
General Purpose Modelling
Recall that simulation in the context of this book was defined as:
1. modelling elements,
2. entities,
Modelling elements vary from one simulation system to another, but most
simulation systems include elements that represent work tasks, queuing of
entities, and collection of statistics.
Entities are virtual objects that are essential to modelling dynamic
systems such as the ones we are interested in. The entity may represent a customer
requiring service (e.g., a truck that requires loading); or a communication
message between various elements to regulate ow in the model (e.g., all pre-
cast material required to start installation has been delivered; send a signal
to the installation sub-model that installation can commence).
To build a model, we generally need to describe the life-cycle of the entity
as it navigates from one modelling element to the next in the model. In
general terms, when a model is created, one should be able to describe the
general work flow of the real construction process by simply following the
journey of the entity within the model.
Directional arrows describe the direction the entity follows in the model.
The entity originates from one modelling element and generally ows to an-
other element as per the direction of the arrow connecting the elements.
Containers, as the name implies, are used to hold information pertinent
to the model, but where entities generally do not go. An example of this
may be a container to hold statistics, which might be required to define
what statistics need to be collected by other elements in the model. When
observations are collected by other elements, they are simply stored in this
container to analyze after the simulation is complete.
A dynamic process interaction simulation model similar to the ones of
interest to us in this book is created by virtue of creating entities, routing
them between dierent modelling elements over time, and observing and
recording the changes in the system, until the simulation stops. In essence,
we build a simulation model by virtue of creating entities and following their
lifecycle.
element (the circular element in the model) and congure it to create a truck
at the start of simulation. Next, we set up the elements required to model
the life cycle of the truck. First, the entity (truck) should load dirt. The
loading activity can be modelled with a "Task" element (the square elements
in the model). The loading task has one server specified since we only have
one excavator. This type of "constrained" task forces trucks to wait in a
queue if another truck is being served by the excavator at the time it arrives.
When the server becomes available, the trucks in the queue will be served
on a first-come, first-served basis. Once the truck finishes loading, it passes
on to another "Task" element, which models the travel of the truck to the
dump site. This task is not constrained by how many trucks are traveling,
and as such, it has an unlimited number of servers and there will be no
queuing at the task. We call this type of task "unconstrained." Once the
travel is complete, the truck proceeds to another "Task" element that models
the dumping process. Again, this task is unconstrained. Once dumping is
complete, the truck passes through a "Counter" element (the small circle
with a flag on top) that records production (a truckload of 16 m3 has been
produced). After passing through the counter, the truck passes on to another
unconstrained "Task" element that models the return trip. Finally, the truck
is routed back to the loading task to begin another cycle.
Quantity (input): The total number of entities to create. Once the element
has created this many entities it will cease introducing entities into the
model.
Created (output): The total number of entities that were created during
simulation. Note that it is possible for this number to be less than the
value of the Quantity property if, for example, the simulation termi-
nated early. It will never be larger than the Quantity property.
Duration (input): The duration of the activity. The time can be constant,
random, or a mathematical function as required, though you should make
sure that the value is never negative. Be especially wary of probability
distributions that are unbounded below (the normal distribution, for
example). Simphony will issue a warning if you specify such a
distribution.
Initial (input): The initial value of the Counter. This property is generally
set to zero.
Step (input): The amount the Counter should be incremented with each
passing entity. Normally, this property is set to 1; however, it is
sometimes useful to set it to a value that represents the "capacity" of the
passing entity. For example, in an earthmoving model it could be set to
the capacity of the truck (16 m3) so that the counter is counting cubic
meters of dirt delivered rather than the truck cycles. It is possible to
set this property to a negative value so that the counter is counting
down.
Count (output): The number of entities that passed through the Counter
during simulation.
Time (output): The simulation time at which the most recent passing en-
tity was observed. Note that this need not be the time at which simu-
lation finished, although if the Counter was responsible for terminating
simulation (i.e., the limit was reached), it will be.
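To illustrate how the Initial and Step inputs interact with the Count and Time outputs, here is a minimal stand-in class. This is our own sketch for exposition, not Simphony's actual implementation:

```python
class Counter:
    """Toy production counter: Initial and Step inputs, Count and Time outputs."""
    def __init__(self, initial=0, step=1):
        self.count = initial   # Count output starts at the Initial input
        self.step = step       # increment applied per passing entity
        self.time = None       # Time output: when the last entity passed

    def entity_passed(self, tnow):
        self.count += self.step
        self.time = tnow

# count cubic meters rather than truck cycles: one 16 m3 load per entity
counter = Counter(step=16)
for tnow in (27, 34):          # trucks pass the counter at these times
    counter.entity_passed(tnow)
print(counter.count, counter.time)   # 32 34
```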
so it should take roughly 625 × 40 = 25,000 minutes to move all of the dirt.
When the model is simulated inside Simphony, the production counter re-
ports that 24,987 minutes elapsed before the final cubic meter was observed.
The production counter also reports that the overall productivity of the sys-
tem was 0.400 m3 per minute, which works out to 24 m3 per hour. These
results from Simphony confirm that the model is accurate and logical to
follow, but we can note that all we have done is added up the times it takes
to complete each of its tasks! This leads us to a question: do we actually
need a simulation model for this process, or would spreadsheet calculations
be sufficient? The answer will become evident as we return to this scenario
throughout the chapter.
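Indeed, the deterministic single-truck result can be reproduced with plain arithmetic. The sketch below assumes the task durations used in this chapter's hand simulation (7, 17, 3, and 13 minutes) and that the counter observes each load at the dump, before the return leg:

```python
load, travel, dump, ret = 7, 17, 3, 13      # minutes
cycle = load + travel + dump + ret          # 40 minutes per truckload
loads = 10_000 // 16                        # 625 truckloads of 16 m3

# the final load is counted at the dump, so the last return leg is excluded
total_minutes = loads * cycle - ret
print(total_minutes)                        # 24987

rate = 10_000 / total_minutes               # m3 per minute
print(round(rate, 3), round(rate * 60))     # 0.4 24
```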
For now, let's make our model more realistic. Suppose that we have 7
trucks in our process (all the same size for now: 16 m3), and one excavator
for excavation and loading. Although we can still manage to compute the
production time using simple arithmetic, we will now have to account for
queuing at the excavator which may complicate matters (we recommend
that the reader attempt this calculation to verify the potential benets).
According to the data obtained at the site, the loading time is 7 minutes and
the average back cycle time is 33 minutes. The total quantity of dirt that
should be removed is 10,000 m3 or 625 truckloads.
The simulation model above can be quickly adjusted to reflect the new
situation. This is achieved by simply changing the properties of the affected
elements in the model. In this case, the Create element needs to be
reconfigured to create 7 entities (trucks) at the start of simulation instead of just
one. This time, when the simulation is run, the results are as follows:
Final simulation time: 4395 min = 73.25 hrs.
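A reader who wants to check the 4,395-minute figure without Simphony can do so with a small event-list simulation in the spirit of this chapter. Everything below (names and structure) is our own sketch of the 7-truck model, using constant durations of 7, 17, 3, and 13 minutes:

```python
import heapq

LOAD, TRAVEL, DUMP, RETURN = 7, 17, 3, 13
LOADS_NEEDED = 10_000 // 16            # 625 truckloads of 16 m3

events = []                            # heap of (time, seq, truck, phase)
seq = 0
def schedule(time, truck, phase):
    global seq
    heapq.heappush(events, (time, seq, truck, phase))
    seq += 1

now = 0
loader_free = True
waiting = list(range(7))               # all 7 trucks start at the loader

def start_load_if_possible():
    global loader_free
    if loader_free and waiting:
        loader_free = False
        schedule(now + LOAD, waiting.pop(0), "loaded")

start_load_if_possible()
dumped = 0
while dumped < LOADS_NEEDED:
    now, _, truck, phase = heapq.heappop(events)
    if phase == "loaded":              # loader freed; truck heads out
        loader_free = True
        schedule(now + TRAVEL + DUMP, truck, "dumped")
        start_load_if_possible()
    elif phase == "dumped":            # counter fires here, before the return
        dumped += 1
        schedule(now + RETURN, truck, "returned")
    else:                              # "returned": rejoin the loading queue
        waiting.append(truck)
        start_load_if_possible()

print(now)                             # 4395
```

With 7 trucks, the loader never goes idle (each truck's back cycle of 33 minutes is shorter than the 42 minutes it takes to load the other six), so the 625th load finishes at 625 × 7 = 4,375 minutes and is counted 20 minutes later at the dump.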
As before, the simulation time and overall productivity are reported by the
production counter. The remaining statistics are reported by the loading
task. Using this simple model, we can investigate many aspects of the pro-
cess, including estimating production rates, balancing equipment, and study-
ing the impact of various factors on production.
We shall make one last improvement before we leave the simple intro-
ductory model we've created. Construction activities are generally uncertain
in their timing. The back cycle time of the truck will not always be 33
minutes, even for the same route. We can model variability in duration of
work tasks using probability distributions. In our example, we can use an
exponential distribution with a mean of 7 minutes to model the loading task,
while the travel time may have a triangular distribution with a minimum of
14 minutes, a maximum of 22 minutes, and a most likely value of 17 min-
utes (similar to a PERT duration estimate). We'll leave the duration of the
dumping and return tasks at the constant values for now. This gives us a
more accurate representation of our real construction process (although still
simplied). The results from the revised model are shown below:
[Figure 5.3: production rate (m3 per minute) versus simulation time (min)]
Note that because the simulation model is no longer deterministic, the results
would almost certainly be dierent if the model were run again.
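The two distributions just described are easy to experiment with using Python's standard library. This is only an illustration of the distributions themselves, not Simphony's input format:

```python
import random

random.seed(1)  # fix the seed so the experiment is repeatable

def load_duration():
    # exponential with mean 7 minutes (expovariate takes the rate, 1/mean)
    return random.expovariate(1 / 7)

def travel_duration():
    # triangular: minimum 14, maximum 22, most likely (mode) 17 minutes
    return random.triangular(14, 22, 17)

loads = [load_duration() for _ in range(100_000)]
print(round(sum(loads) / len(loads), 1))   # sample mean, near 7

t = travel_duration()
print(14 <= t <= 22)                       # True
```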
Simulation models are fairly rich in information related to the process. For
example, the production counter tracks production throughout the simulated
time. The chart in Figure 5.3 shows how the production rate changed as
the simulation progressed. The chart demonstrates that it took the process
roughly 600 minutes (a little over 10 hours) to reach a steady state of around
2.05 m3 per minute.
causes the state of the simulation system to change. For example, an event
might be a truck in an earthmoving simulation arriving at the dump site,
thus causing the state of the truck to change from "hauling" to "dumping";
it may be a welder in a pipe spool fabrication model beginning work on a
spool, thus changing the state of the spool from "fitting" to "welding" and,
at the same time, causing the welder to become unavailable to other spools.
An entity is the primary object associated with an event. In these examples,
the entities are the truck and the spool, respectively. Simulation time is the
time at which events occur.
A discrete event simulation engine is responsible for scheduling and pro-
cessing these events. Scheduling events is the process of the simulationist
informing the simulation engine of precisely when an event will occur. To do
this, the simulationist needs to tell the simulation engine three things:
1. The entity associated with the event (e.g., the particular truck or spool
to which the event applies),
2. What event is going to occur (e.g., a truck will arrive at the dump site
or a welder will become available to work on a spool), and
3. The simulation time at which the event will occur.
The processing of an event happens when the simulation engine advises the
simulationist that the time has come for a previously scheduled event to
occur. When an event is processed, the simulation engine will tell you the
same three pieces of information that were specied at the time the event
was scheduled, namely, what event is being processed, the entity associated
with the event, and the simulation time. In response to this information, the
simulationist will typically update the state of the system and/or schedule
further events. Note that it is not permissible to schedule an event with a
simulation time prior to the event being processed (i.e., time doesn't run
backwards!).
In order to accomplish these responsibilities, a discrete event simulation
engine requires two things: a list of scheduled events (ordered by simulation
time), and a simulation clock. The list of scheduled events keeps track of
those events that have been scheduled but not processed. When an event is
scheduled it is inserted into the list at the correct location, and when it is
processed it is removed from the list. The simulation clock keeps track of the
current simulation time.
5.2. HAND SIMULATION 123
[Flowchart: the hand simulation algorithm for general purpose modelling.
Its logic is as follows.]

1. Start: set TNOW = 0.
2. Can a task begin? If yes, generate a (possibly stochastic) duration ∆
   for the activity, calculate the event time TNOW + ∆, record the event
   in the event list, and return to step 2.
3. If no task can begin, record intrinsic statistical observations.
4. If the event list is empty, stop; otherwise, transfer the earliest
   event on the event list to the chronological list.
5. Set TNOW to the time of the transferred event.
6. Release entities from the activity, then return to step 2.

NOTES:
1. A Task can begin if a prior modelling element has released an entity.
2. In the case of a tie, the earliest event is considered to be the one
   that was scheduled (i.e., recorded) first.
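These two data structures, a time-ordered event list and a clock, are simple to sketch. The class below is our own illustration (not Simphony's API); note the guard against scheduling in the past and the tie-breaker that processes the earliest-scheduled event first:

```python
import heapq

class Engine:
    """Minimal discrete event engine: a time-ordered event list plus a clock."""
    def __init__(self):
        self.now = 0
        self._events = []   # heap of (time, seq, entity, event_name)
        self._seq = 0       # insertion order breaks ties between equal times

    def schedule(self, time, entity, event_name):
        if time < self.now:
            raise ValueError("time doesn't run backwards!")
        heapq.heappush(self._events, (time, self._seq, entity, event_name))
        self._seq += 1

    def run(self, handler):
        while self._events:
            self.now, _, entity, name = heapq.heappop(self._events)
            handler(self.now, entity, name)

log = []
engine = Engine()
engine.schedule(7, "truck A", "loaded")
engine.schedule(7, "truck B", "created")   # tie at time 7: A was scheduled first
engine.run(lambda t, e, n: log.append((t, e, n)))
print(log)   # [(7, 'truck A', 'loaded'), (7, 'truck B', 'created')]
```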
The next step of the algorithm asks whether a task can begin. The answer
is yes; both trucks can begin the loading task as they have just been created.
Let's assume that truck A goes first, so it is moved to the loading task. The
duration of this task is 7 minutes and TNOW is currently 0, so the time at
which the task will finish is TNOW + ∆ = 0 + 7 = 7. We record this under
the "Events" column; the paper now looks like this:
We now move back to the question of whether a task can begin. This time
the answer is no; truck A is busy being loaded, and truck B cannot begin the
loading task, because the task is constrained and the only available server is
busy with truck A. We therefore move to the next question: is the event list
empty? Again, the answer is no; the event we just recorded is in the list. We
now need to scan the event list for the earliest event, which is easy as there
is only one event. We copy this event into the "Chronological" column
and cross it out from the "Events" column. Next, we need to set TNOW to
the time of this event, so we cross out the 0 in the "TNOW" column and write
a 7 underneath. Our sheet of paper now looks like this:
At this point, the task of loading truck A is complete and the truck
is released from the loading task. We now return to the question: can a
task begin? This time the answer is yes; truck A can begin the travel task
and truck B can begin the loading task. It does not matter which truck
we choose to deal with first (the algorithm will produce the same results
in either case), so let's pick truck B. First, we move the entity representing
truck B to the loading task, and then we calculate the finish time, which is
TNOW + ∆ = 7 + 7 = 14. We record the event in the event list. Again we ask
the question: can a task begin? The answer is yes, as we still need to deal
with truck A traveling. The entity representing truck A is now moved to the
travel task, and the finish time is calculated to be TNOW + ∆ = 7 + 17 = 24.
This event is also recorded under the "Events" column. Our sheet of paper
now looks like this:
Again, we ask the question: can a task begin? This time the answer is no;
truck A is busy traveling and truck B is being loaded. In addition, the event
list isn't empty so we need to scan the list for the earliest event, which is the
completion of loading truck B. We cross this event out under the "Events"
column and copy it to the "Chronological" column. In addition, we update
TNOW to 14. Our sheet of paper now looks like this:
TNOW Events Chronological Prod
0 A loaded @ 7
7 B loaded @ 14 A loaded @ 7
14 A arrives @ 24 B loaded @ 14
The loading task is now complete and the entity representing truck B is
released. We return to the question: can a task begin? This time the answer
is yes; truck B can begin the travel task. We move the entity to the travel
task and calculate the finish time as TNOW + ∆ = 14 + 17 = 31. We record
this new event under the "Events" column:
TNOW Events Chronological Prod
0 A loaded @ 7
7 B loaded @ 14 A loaded @ 7
14 A arrives @ 24 B loaded @ 14
B arrives @ 31
This time, when we ask if a task can begin, the answer is no; both trucks
are in the process of traveling to the dump site. Scanning the event list, we
see that the arrival of truck A at the dump site is the earliest event, so it is
crossed out from the "Events" column and transferred to the "Chronological"
column. TNOW is then updated to 24:
TNOW Events Chronological Prod
0 A loaded @ 7
7 B loaded @ 14 A loaded @ 7
14 A arrives @ 24 B loaded @ 14
24 B arrives @ 31 A arrives @ 24
The travel task is now complete and the entity representing truck A is
released. At this point, truck A can begin the dump task. The finish time
for this task is TNOW + ∆ = 24 + 3 = 27, and an event is recorded:
There are no other tasks that can begin at this time, so the event list is
scanned for the earliest event. This turns out to be the event we just sched-
uled, so that event is crossed out from the "Events" column and transferred
to the "Chronological" column. TNOW is then updated to 27 and the entity
representing truck A is released. At this point, the truck A entity passes
through the production counter, and we record the production in the "Prod"
column; we've moved 1 truckload in 27 minutes. Once it has passed the
production counter, truck A can begin the return task. The finish time is
TNOW + ∆ = 27 + 13 = 40, and the event is added to the "Events" column.
Our sheet of paper now looks like this:
As no other tasks can begin and the event list is not empty, we need
to scan the event list for the earliest event. This is the arrival of truck B
at the dump site. The event is removed from the event list, added to the
chronological list, and TNOW is updated to 31. Truck B is now released
from the travel task and it can begin the dump task. The finish time is
TNOW + ∆ = 31 + 3 = 34, and the event is added to the event list. Our sheet
of paper now looks like this:
Once again, there are no other tasks that can begin, and there are still
events to process. The earliest event in the event list is the one we just
scheduled, so it is removed, transferred to the chronological list, and TNOW
is updated to 34. Truck B is now released from the dump task and passes
through the production counter. As with truck A, we record the production
in the production column: we've now managed to move 2 truckloads in 34
minutes. After passing the production counter, truck B can begin the return
task, which has a finish time of TNOW + ∆ = 34 + 13 = 47. Once this event
is added to the event list, our paper looks like this:
Again, there are no other tasks that can begin and there are still events
to process. The earliest event is the return of truck A to the loading site,
so this event is removed, transferred to the chronological list, and TNOW is
updated to 40. Truck A is released from the return task and can now begin
the load task, which has a finish time of TNOW + ∆ = 40 + 7 = 47. Once this
event is added to the event list, our paper looks like this:
No further tasks can begin and the event list still contains events, so we
need to scan it for the earliest event. This time the result is a tie; both trucks
are scheduled to complete their tasks at time 47. We need to make use of
our tie-breaking procedure, which states that in the case of a tie, the earliest
event is the one highest on the list (which will be the event that was recorded
first). Thus, we will process the return of truck B to the loading site first.
This event is removed from the event list, transferred to the chronological
list, and TNOW is updated to 47. Truck B is now released from the return
task; however, it cannot begin the load task, as the load task is constrained
and the only server is still busy with truck A. Our paper now looks like this:
TNOW  Events           Chronological    Prod
0     A loaded @ 7
7     B loaded @ 14    A loaded @ 7
14    A arrives @ 24   B loaded @ 14
24    B arrives @ 31   A arrives @ 24
27    A dumped @ 27    A dumped @ 27    1/27
31    A returns @ 40   B arrives @ 31
34    B dumped @ 34    B dumped @ 34    2/34
40    B returns @ 47   A returns @ 40
47    A loaded @ 47    B returns @ 47
At this point no further tasks can begin. There is only one event in
the event list (the completion of loading truck A), so it is transferred to
the chronological list. TNOW does not need to be updated as its value has
been updated to 47 previously, but we transfer its value to the next row
nevertheless. Truck A is released from the loading task and two tasks can
now begin: the loading of truck B and the traveling of truck A. Once these
tasks are scheduled, our sheet of paper will look like this:
TNOW  Events           Chronological    Prod
0     A loaded @ 7
7     B loaded @ 14    A loaded @ 7
14    A arrives @ 24   B loaded @ 14
24    B arrives @ 31   A arrives @ 24
27    A dumped @ 27    A dumped @ 27    1/27
31    A returns @ 40   B arrives @ 31
34    B dumped @ 34    B dumped @ 34    2/34
40    B returns @ 47   A returns @ 40
47    A loaded @ 47    B returns @ 47
47    B loaded @ 54    A loaded @ 47
      A arrives @ 64
TNOW increased by 7, sometimes by 10, and once it didn't increase at all (since
there were two events to process with the same simulation time). This is
entirely characteristic of discrete event simulation. Second, the chronological
list provides us with a story line of what happened during simulation. Even
though the various events were not scheduled in chronological order, the
algorithm ensures that they are processed in the correct order.
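The bookkeeping we have been doing by hand (a future-event list, the TNOW clock, and a chronological log) is exactly what a discrete event simulation engine automates. The following is a rough Python sketch of that engine for the two-truck cycle; it is an illustration, not Simphony code. The durations and the first-scheduled-first tie-breaking rule are taken from the walkthrough above.

```python
import heapq

# Fixed activity durations from the hand simulation (minutes).
LOAD, TRAVEL, DUMP, RETURN = 7.0, 17.0, 3.0, 13.0

def simulate(until=47):
    events = []              # future-event list: (time, seq, truck, phase)
    log = []                 # chronological list
    seq = 0                  # tie-breaker: earlier-scheduled event wins
    loader_busy = False
    load_queue = ["A", "B"]  # both trucks start waiting at the loader

    def schedule(t, truck, phase):
        nonlocal seq
        heapq.heappush(events, (t, seq, truck, phase))
        seq += 1

    def try_start_loading(now):
        nonlocal loader_busy
        if not loader_busy and load_queue:
            truck = load_queue.pop(0)
            loader_busy = True
            schedule(now + LOAD, truck, "loaded")

    try_start_loading(0.0)
    while events:
        now, _, truck, phase = heapq.heappop(events)
        if now > until:
            break
        log.append(f"{truck} {phase} @ {now:g}")
        if phase == "loaded":
            loader_busy = False
            try_start_loading(now)       # the next truck seizes the loader
            schedule(now + TRAVEL, truck, "arrives")
        elif phase == "arrives":
            schedule(now + DUMP, truck, "dumped")
        elif phase == "dumped":
            schedule(now + RETURN, truck, "returns")
        elif phase == "returns":
            load_queue.append(truck)
            try_start_loading(now)
    return log
```

Running `simulate(until=47)` reproduces the chronological column of the table above, including the tie at time 47, where the return of truck B (scheduled at time 34) is processed before the loading of truck A (scheduled at time 40).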
Which means that we are going to need six columns: the three required to
emulate the simulation engine, and three more to track the statistics. As
before, we begin by writing a 0 under the TNOW column:
Next, we ask whether an activity can begin; the answer is yes, the first
truck can arrive. The finish time for this activity is TNOW + ∆ = 0 + 0 = 0.
When this event has been added to the "Events" column, our sheet of paper
looks like this:
At this point, no other activities can begin, so we scan the event list
for the earliest event. The only event on the event list is the one we just
scheduled: the arrival of the first truck. We now cross out this event from
the "Events" column and add it to the "Chronological" column, and then
update TNOW to the time of the event. At this point, the first truck begins
loading, so the batch plant is now utilized. In addition, there are no trucks
in the queue waiting, and the first truck did not have to wait to be loaded.
We record these observations in the statistical columns. When we're done,
our sheet of paper will look like this:
TNOW  Events             Chronological      Util   Queue  Wait
0     T1 arrives @ 0
0                        T1 arrives @ 0     100%   0      0
The loading of the first truck has begun, so we need to schedule an event
for it. The finish time for the event is TNOW + ∆ = 0 + 16 = 16. In addition,
we need to schedule the arrival of the next truck. The finish time for that
event is: TNOW + ∆ = 0 + 5 = 5. With these two events added, our paper
looks like this:
TNOW  Events             Chronological      Util   Queue  Wait
0     T1 arrives @ 0
0     T1 departs @ 16    T1 arrives @ 0     100%   0      0
      T2 arrives @ 5
No further activities can begin, so we scan the event list for the earliest
event. This is the arrival of the second truck at simulation time 5. We
transfer this event to the chronological list, and update TNOW to 5. This
time, the truck cannot begin loading immediately as the batch plant is still
busy with the first truck, so it will have to wait, and our queue grows to a
length of 1. After updating the statistics, our paper looks like this:
TNOW  Events             Chronological      Util   Queue  Wait
0     T1 arrives @ 0
0     T1 departs @ 16    T1 arrives @ 0     100%   0      0
5     T2 arrives @ 5     T2 arrives @ 5     100%   1
The next activity to begin is the arrival of the third truck. The finish
time for this event is TNOW + ∆ = 5 + 5 = 10. Once this event is scheduled,
our paper looks like this:
No further activities can begin, so we need to scan the event list for the
earliest event. This turns out to be the arrival of the third truck, so we
transfer that event to the chronological list and update TNOW to 10. As with
the second truck, the third truck cannot begin loading, so it is queued and
our queue grows to a length of 2. After updating the statistics, our paper
looks like this:
At this point, we need to schedule the arrival of the fourth truck. The
finish time for this event is TNOW + ∆ = 10 + 16 = 26. Once this event is
scheduled, our paper looks like this:
No further activities can begin, so we scan the event list for the earliest
event, which is the departure of the rst truck. We transfer this event to
the chronological list, and update TNOW to 16. Now that it has departed,
the first truck is no longer of interest to us. The second truck, on the other
hand, is. It can now begin loading, which causes our queue to decrease in
length by 1; in addition, a glance at the chronological list shows us that the
second truck arrived at time 5 and it is now time 16, so the truck waited in
the queue for 16 − 5 = 11 minutes. After recording these statistics, our paper
looks like this:
As the second truck has begun loading, we should schedule the completion
of this activity. The finish time is TNOW + ∆ = 16 + 15 = 31. With this
event scheduled, our paper looks like this:
TNOW  Events             Chronological      Util   Queue  Wait
0     T1 arrives @ 0
0     T1 departs @ 16    T1 arrives @ 0     100%   0      0
5     T2 arrives @ 5     T2 arrives @ 5     100%   1
10    T3 arrives @ 10    T3 arrives @ 10    100%   2
16    T4 arrives @ 26    T1 departs @ 16    100%   1      11
      T2 departs @ 31
Once again, no activities can begin: the second truck is still loading, and
the fourth truck is yet to arrive. We need to scan the event list for the earliest
event, which turns out to be the arrival of the fourth truck. We transfer this
event to the chronological list and update TNOW to 26. As the batch plant
is busy with the second truck, the fourth truck cannot begin loading, so it is
queued and our queue grows to a length of 2. After updating the statistics,
our paper looks like this:
TNOW  Events             Chronological      Util   Queue  Wait
0     T1 arrives @ 0
0     T1 departs @ 16    T1 arrives @ 0     100%   0      0
5     T2 arrives @ 5     T2 arrives @ 5     100%   1
10    T3 arrives @ 10    T3 arrives @ 10    100%   2
16    T4 arrives @ 26    T1 departs @ 16    100%   1      11
26    T2 departs @ 31    T4 arrives @ 26    100%   2
We now need to schedule the arrival of the fifth truck. This will happen
at TNOW + ∆ = 26 + 38 = 64. With this event scheduled, our paper looks
like this:
TNOW  Events             Chronological      Util   Queue  Wait
0     T1 arrives @ 0
0     T1 departs @ 16    T1 arrives @ 0     100%   0      0
5     T2 arrives @ 5     T2 arrives @ 5     100%   1
10    T3 arrives @ 10    T3 arrives @ 10    100%   2
16    T4 arrives @ 26    T1 departs @ 16    100%   1      11
26    T2 departs @ 31    T4 arrives @ 26    100%   2
      T5 arrives @ 64
No further activities can begin, so we scan the event list for the earliest
event, which is the departure of the second truck. We transfer this event to
the chronological list, and update TNOW to 31. After the second truck has
departed and left the system, the third truck can begin loading. This causes
our queue to decrease in length by 1 and, in addition, the chronological list
shows us that the third truck arrived at time 10 and it is now time 31, so the
truck waited in the queue for 31 − 10 = 21 minutes. After recording these
statistics, our paper looks like this:
As the third truck has begun loading, we should schedule the completion
of this activity. The finish time is TNOW + ∆ = 31 + 10 = 41. With this
event scheduled, our paper looks like this:
No further activities can begin, so we scan the event list for the earliest
event, which is the departure of the third truck. We transfer this event to
the chronological list, and update TNOW to 41. After this truck has left
the system, the fourth truck can begin loading. This causes our queue to
decrease in length by 1 and, in addition, the chronological list shows us that
the fourth truck arrived at time 26 and it is now time 41, so the truck waited
in the queue for 41 − 26 = 15 minutes. After recording these statistics, our
paper looks like this:
As the fourth truck has begun loading, we need to schedule its departure.
This will happen at TNOW + ∆ = 41 + 13 = 54. With this event scheduled,
our paper looks like this:
No further activities can begin, so we scan the event list for the earliest
event, which is the departure of the fourth truck. We transfer this event
to the chronological list, and update TNOW to 54. This time, there are no
further trucks in the queue, so our batch plant becomes idle. Our paper now
looks like this:
TNOW  Events             Chronological      Util   Queue  Wait
0     T1 arrives @ 0
0     T1 departs @ 16    T1 arrives @ 0     100%   0      0
5     T2 arrives @ 5     T2 arrives @ 5     100%   1
10    T3 arrives @ 10    T3 arrives @ 10    100%   2
16    T4 arrives @ 26    T1 departs @ 16    100%   1      11
26    T2 departs @ 31    T4 arrives @ 26    100%   2
31    T5 arrives @ 64    T2 departs @ 31    100%   1      21
41    T3 departs @ 41    T3 departs @ 41    100%   0      15
54    T4 departs @ 54    T4 departs @ 54    0%     0
Once again, no further activities can begin, and the only event on the
event list is the arrival of the fifth truck. We transfer this event to the
chronological list and update TNOW to 64. As the batch plant is currently
idle, the fifth truck can begin loading immediately and does not need to be
queued. After updating the statistics, our paper looks like this:
As the fifth truck has begun loading, we need to schedule its departure.
The finish time for this event is TNOW + ∆ = 64 + 21 = 85. We also need
to schedule the arrival of the sixth and final truck. The finish time for that
event is TNOW + ∆ = 64 + 13 = 77. With these two events added, our paper
looks like this:
At this point, no further activities can begin, so we scan the event list
for the earliest event. This is the arrival of the sixth truck at time 77. We
transfer this event to the chronological list and update TNOW to 77. As the
batch plant is busy with the fifth truck, the new truck is forced to queue.
After updating the statistics, our paper looks like this:
As the sixth truck has begun to load, we need to schedule its departure.
This will happen at TNOW + ∆ = 85 + 15 = 100. With this event added, our
paper looks like this:
No further activities can begin, so we scan the event list for the earliest event.
The only event on the list is the departure of the sixth truck, so we transfer
that event to the chronological list and update TNOW to 100. As there are
no other trucks to service, the batch plant is now idle. After updating the
statistics, our paper looks like this:
TNOW  Events             Chronological      Util   Queue  Wait
0     T1 arrives @ 0
0     T1 departs @ 16    T1 arrives @ 0     100%   0      0
5     T2 arrives @ 5     T2 arrives @ 5     100%   1
10    T3 arrives @ 10    T3 arrives @ 10    100%   2
16    T4 arrives @ 26    T1 departs @ 16    100%   1      11
26    T2 departs @ 31    T4 arrives @ 26    100%   2
31    T5 arrives @ 64    T2 departs @ 31    100%   1      21
41    T3 departs @ 41    T3 departs @ 41    100%   0      15
54    T4 departs @ 54    T4 departs @ 54    0%     0
64    T5 departs @ 85    T5 arrives @ 64    100%   0      0
77    T6 arrives @ 77    T6 arrives @ 77    100%   1
85    T6 departs @ 100   T5 departs @ 85    100%   0      8
100                      T6 departs @ 100   0%     0
At this point, no other activities can begin and a scan of the event list
shows that it is empty. The simulation has come to an end.
[Step chart: Utilization (0%–100%) vs. Simulation Time (0–100 min)]
If we denote this step function by f , then the average utilization will be the
area under f divided by the total simulation time:
(1/100) ∫₀¹⁰⁰ f(x) dx = [1 × (54 − 0) + 0 × (64 − 54) + 1 × (100 − 64)] / 100 = 90%.
[Step chart: Queue Length vs. Simulation Time (0–100 min)]
If we denote this function by g , then the average length of the queue will be
the area under g divided by the total simulation time:
(1/100) ∫₀¹⁰⁰ g(x) dx = 0.55 trucks.
(0 + 11 + 21 + 15 + 0 + 8) / 6 ≈ 9.17 min.
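These three statistics are easy to check with a short script. The sketch below is plain Python (the helper name `time_average` is ours, not part of Simphony); it recomputes the averages from the utilization and queue-length step functions and the six recorded wait times in the table.

```python
def time_average(changes, total_time):
    """Average of a step function over [0, total_time].

    changes: list of (time, new_value); each value holds until the next change.
    """
    area = 0.0
    for (t0, v), (t1, _) in zip(changes, changes[1:] + [(total_time, None)]):
        area += v * (t1 - t0)
    return area / total_time

# Utilization f(t): busy (1) from 0 to 54, idle from 54 to 64, busy again to 100.
util = time_average([(0, 1), (54, 0), (64, 1)], 100)

# Queue length g(t): changes at the event times recorded in the table,
# including the sixth truck waiting from time 77 to 85.
queue = time_average([(0, 0), (5, 1), (10, 2), (16, 1), (26, 2),
                      (31, 1), (41, 0), (77, 1), (85, 0)], 100)

# Waiting time: one observation per truck.
waits = [0, 11, 21, 15, 0, 8]
avg_wait = sum(waits) / len(waits)
```

The script returns 90% utilization, 0.55 trucks, and about 9.17 minutes, matching the hand calculations above.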
2. Activities with variable durations: Activities never take the exact same
amount of time every time they're performed. In reality, they take a
varying amount of time that depends on a large number of factors. In
discrete event simulation, activity durations are traditionally modelled
using probability distributions.
4. Shared resources (i.e., resources that have not been exclusively assigned
to a particular activity): In our earthmoving example, the excavator
may be required for activities other than loading trucks, in which case,
trucks will be forced to wait if the excavator is engaged elsewhere.
Taking all of these issues into account would make a mathematical solution
impracticable. However, it is relatively easy for a simulation model to take
all of these factors into account. Let's look at an example of a problem in
which many of these issues turn up.
meters of progress, the track and the utility connections to the TBM need
to be extended. This operation takes 4 hours, and no other work can be
performed during this time; i.e., no trains can be in the tunnel and the TBM
must be idle. Second, after every 90 meters of forward progress, surveying
must take place to ensure that the TBM is not off course. It takes 8 hours to
complete the surveying and, as with track extension, no other work can be
done while it is taking place. At points where surveying and track extension
are both required, surveying takes precedence.
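The precedence rule can be made concrete with a tiny Python sketch. This is a hypothetical helper, not part of the Simphony model; the 6 m extension interval is the multiple the model checks later, and the intervals and durations are assumptions drawn from this description.

```python
SURVEY_EVERY_M, EXTEND_EVERY_M = 90, 6   # metres of progress between stoppages
SURVEY_HOURS, EXTEND_HOURS = 8, 4        # duration of each stoppage

def downtime_at(chainage_m):
    """Hours of stoppage triggered when the TBM reaches this chainage."""
    if chainage_m % SURVEY_EVERY_M == 0:
        return SURVEY_HOURS      # surveying takes precedence over extension
    if chainage_m % EXTEND_EVERY_M == 0:
        return EXTEND_HOURS
    return 0
```

Note that every survey chainage is also an extension chainage (90 is a multiple of 6), so checking the survey condition first is what implements the precedence rule.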
Modelling Strategy
The first step in creating any discrete event simulation model is deciding
what the entities flowing through the model will represent. In the tunnelling
operation described above, it is the trains that move from place to place,
so we will define our entity to represent a train. Next, we need to examine
the process we're modelling and identify the resources. In our case there are
three: the TBM, the crane, and the track. Finally, in order to begin the
modelling process, we need to identify a portion of the problem that is small
enough for us to understand without becoming overwhelmed. The portion
of the tunnelling problem we'll tackle first is the train cycle.
The train cycle is the journey made by the train. First it travels out to
the TBM, where the liners are unloaded and the muck cars are filled with
spoil. Then, it returns to the undercut, where the spoil is unloaded by the
crane and a new set of liners is loaded. Finally, the train waits to begin its
next trip out to the TBM. Figure 5.11 shows what the train cycle might look
like as a General Purpose model.
InUse (output): The number of servers currently in use, i.e., the number
of servers currently serving an entity. The number of servers available
plus the number of servers in use will equal the total number of servers.
servers currently available. Suppose that the entity at the head of the
queue requires two servers to perform its work and the next entity
requires only one. If a single server of that resource became available,
what should happen? Should the server sit idle waiting for a second
server to become available so that the first entity can be served, or
should it be assigned to serve the second entity immediately? The
IsBlocking property specifies how this decision will be made. If the
property is set to True, the former option will be taken, and if set to
False, the latter. The default value is False.
Priority (input): Defines the order in which a Resource will check connected
Files for entities. When one (or more) of the servers belonging
to a Resource element becomes available, the Resource element will
attempt to find an entity that can make use of it. It does this by polling
(checking) each File element it is connected to, to see if an entity is
waiting for a server. The order in which File elements are polled is
controlled by their priority. File elements with higher priority will be
checked first. The default value is 0.
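The IsBlocking decision described above can be sketched in a few lines of Python. This is a toy illustration of the rule, not Simphony's actual allocation code, and the helper name is ours.

```python
def next_to_serve(queue, available, is_blocking):
    """Index of the waiting entity the freed servers should go to, or None.

    queue holds (name, servers_needed) pairs in arrival order;
    available is the number of servers currently free.
    """
    for i, (_, needed) in enumerate(queue):
        if needed <= available:
            return i
        if is_blocking:
            return None  # hold the free servers for the entity at the head
    return None
```

With one free server and a head-of-queue entity needing two, a blocking resource serves no one (the server waits), while a non-blocking resource skips ahead to the next entity that needs only one.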
In the model under discussion, there are three resource elements: one
to represent the TBM, one to represent the crane, and one to represent the
track. Each of these resources is connected to a File element that represents
the queue in which trains will wait for the corresponding resource. The
number of servers defined at each resource is 1, since there is precisely one
TBM, crane, and track in the system we're modelling. The IsBlocking and
Priority properties of the file elements are left at their default values of False
and 0, respectively.
5.3. MODELLING PRODUCTION SYSTEMS 149
Once our resources and the corresponding files have been defined, we
need a way for an entity to make use of them to perform an activity. The
modelling elements used to achieve this are called Capture and Release.
Resource (input): The name of the Resource element from which the en-
tity requires servers. This property is required. Failing to provide a
value will result in an error being issued when you attempt to run the
model.
Servers (input): The number of servers the entity requires. Normally, this
will be left at the default value of 1; however, larger numbers (or a
formula) are permitted. If you set this property to a value greater than
the total number of servers available at the Resource, a warning will
be issued when you attempt to run the model.
File (input): The name of the File element in which the entity should wait
if the requested servers are unavailable. This property is required.
Failing to provide a value will result in an error being issued when you
attempt to run the model.
Resource (input): The name of the Resource element to which the entity
will be releasing servers. This property is required. Failing to provide
a value will result in an error being issued when you attempt to run
the model.
Upon arrival at the TBM, the train entity attempts to capture a server of
the TBM resource. Once the TBM has been captured, the excavation process
begins, and when this process is complete the TBM is released and the train
entity begins the return journey. At present, we have modelled operations at
the TBM in a simplistic fashion that does not match the process described
above. We will improve the model later, but for now, we will simply assume
that the process of excavating 1 meter of tunnel and unloading the concrete
liners from the train is unconstrained and takes a total of 30 minutes. Notice
also that we don't need to model the TBM as a shared resource here. We
could have modelled the excavation process as a constrained task with a single
server and achieved the same result. However, we know from the process
description that the TBM has other duties (e.g., lining and resetting) that
we'll be adding to our model in the future, so from this perspective it makes
sense to use a shared resource.
The Task element that models the return journey of the train is much the
same as the one modelling the journey to the TBM. As before, the distance
the train needs to travel increases as the TBM advances. We'll discuss the
mathematical formula required to model this in a moment.
Once the train returns to the undercut, it releases the server of the track
resource it captured earlier. When this happens, the other train (which has
been waiting the entire time) captures that server and begins its journey out
to the TBM. Meanwhile, the first train attempts to capture the crane and,
once it obtains the crane's services, begins the processes of unloading the
spoil and loading a fresh set of liners. These tasks are both unconstrained
and have durations of 15 and 6 minutes, respectively. Once loading of the
liners is complete, the train releases the crane and attempts to capture the
track so that it can begin its next outbound journey.
"Chainage." This Counter is initially set to zero, and every time a train
completes the excavation activity, it is incremented by one. Thus, its current
count represents the distance the TBM has excavated at any given time
during simulation. Note that we also make use of this element to terminate
the simulation: its Limit property is set to 1227 (the length of the tunnel as
specified in the problem description).
To solve the second problem, we need to write a formula in Visual Basic.
The Duration property of the Task element provides access to Visual Basic
via the builder button in the Property Grid, as shown in Figure 5.12.
We'll discuss formulas and the Formula Editor in more detail in a later
chapter, but for now, it's enough to understand that in Simphony, a formula
is very similar to a mathematical function: it takes an input and it produces
an output. In the case of the Duration of a Task, the input (named Element)
is the Task element itself, and the output is a numeric value indicating how
long a passing entity should be delayed. To calculate the duration in our
case, we need to take the current value of the counter (which is measured in
meters) and divide by 70 (the speed of the train in meters per minute). The
Visual Basic code to do this is shown below:
Public Partial Class Formulas
    Public Shared Function Formula(...) As System.Double
        Return Count("Chainage") / 70.0
    End Function
End Class
Let's examine this formula line by line. The first and last lines define a class
that will contain not only this formula, but all other formulas used by the
model. These two lines will be present in every formula you write, and should
never be modified. All of your Visual Basic code will be placed inside this
class definition. Next, the second and fourth lines define the function that
represents our formula. As with the class definition, these two lines will be
present in every formula you write. Unlike the class definition, however,
they will vary between formulas. In particular, the return type of the
formula can change. The return type of the formula above is System.Double,
which means a numeric (floating-point) value. This makes sense since the
formula is supposed to calculate the duration of the Task, which is a numeric
value. Henceforth, whenever we discuss formulas in this book, we will omit
these four lines.
The most important line for us is the third, the line that performs the
actual calculation. The line begins with the Return statement, which is
a special statement in Visual Basic that indicates that what follows is the
return value of the formula, and that processing of the formula is over. Next,
we use the Count function to get the distance (in meters) the TBM has
excavated so far. The Count function takes as a parameter the name of a
Counter element, and it returns the current count at that element. In our
case, we pass it the text literal "Chainage," which is the name of the Counter
tracking the excavation distance. Finally, we divide by 70 (the speed of the
train in meters per minute) to get the number of minutes required for the
train to travel to the TBM.
The formula for the return trip is similar, but this time we divide by 60,
as the train is somewhat slower on the return trip:
Return Count("Chainage") / 60.0
Interpretation (input): A value that gives a general idea about what in-
formation the statistic is tracking. Possible values include Cost, Cycle
Time, Production Rate, and Utilization. The default value is Generic
(i.e., uninterpreted). It is always a good idea to select a suitable in-
terpretation, as this will allow the Statistic element to produce charts
that are tailored to that particular application.
Statistics Report
Date: Sunday, January 25, 2015
Project: Model
Scenario: Scenario1
Run: 1 of 1

Non-Intrinsic Statistics
Element Name                  Mean Value  Std. Deviation  Obs. Count  Min. Value  Max. Value
Scenario1 (Termination Time)  60,090.864  0.000           1.000       60,090.864  60,090.864

Counters
Element Name  Final Count  Overall Productivity  Average Interarrival  First Arrival  Last Arrival
Chainage      1,227.000    0.020                 48.989                30.000         60,090.864

Resources
Element Name  Avg. Utilization  Std. Deviation  Max. Utilization  Current Utilization  Current Capacity
Crane         42.8 %            49.5 %          100.0 %           0.0 %                1.000
TBM           61.3 %            48.7 %          100.0 %           100.0 %              1.000
Track         100.0 %           0.0 %           100.0 %           100.0 %              1.000

Waiting Files
Element Name  Avg. Length  Std. Deviation  Max. Length  Current Length  Avg. Wait Time
CraneQ        0.000        0.000           1.000        0.000           0.000
TrackQ        0.572        0.495           1.000        1.000           27.969
TrainQ        0.000        0.000           1.000        0.000           0.000
Figure 5.15: General Purpose Model of the Train Cycle with Statistic
[Chart: Cycle Time (min) vs. Simulation Time (0–60,000 min)]
Note that if an entity that has captured resources flows into this element, then
the original entity that flows out of the upper branch will remain associated
with the resources, while the clones that flow out of the lower branch will
not. It is important, therefore, not to release the resources from a clone, as
this will cause an error during simulation.
Quantity (input): The number of entities that must arrive via the lower
branch before a blocked entity will be released. The element's graphic
on the Modelling Surface indicates the value of this property on the
lower branch.
Excavation Rate
In the problem description, the rate at which the TBM advances is stochastic
and depends on the type of soil being excavated (which in turn is dependent
on the chainage). The formula we'll use to calculate the duration for the
excavation activity is as follows:
Select Case Count("Chainage")
    Case Is < 772.0 ' Sandy
        Return 60.0 / SampleBeta(5.2, 3.7, 1.5, 2.8)
    Case Is < 1036.0 ' Heavy Clay
        Return 60.0 / SampleBeta(5.9, 4.3, 0.6, 1.1)
    Case Else ' Sandy
        Return 60.0 / SampleBeta(5.2, 3.7, 1.5, 2.8)
End Select
This formula utilizes Visual Basic's Select Case statement to break the
current chainage down into the three ranges specified in the problem statement.
Then, once the soil conditions are known, a random deviate is generated from
the appropriate distribution. This random deviate is expressed in meters per
hour, so some further calculation is required to determine the number of
minutes required to excavate 1 meter (the length of a liner segment); hence,
we take 60 min/h and divide it by the random deviate. The result is the
duration of the excavation activity in minutes.
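The same logic can be sketched in Python. We assume here that SampleBeta(a, b, low, high) draws a beta deviate with shape parameters a and b rescaled to the range [low, high]; that reading of the arguments is our assumption, not something stated in the text.

```python
import random

def sample_beta(a, b, low, high, rng=random):
    """Beta(a, b) deviate rescaled to [low, high] (assumed SampleBeta semantics)."""
    return low + (high - low) * rng.betavariate(a, b)

def excavation_minutes(chainage_m, rng=random):
    """Minutes to excavate one 1 m liner segment at the given chainage."""
    if chainage_m < 772.0:                       # sandy
        rate = sample_beta(5.2, 3.7, 1.5, 2.8, rng)
    elif chainage_m < 1036.0:                    # heavy clay
        rate = sample_beta(5.9, 4.3, 0.6, 1.1, rng)
    else:                                        # sandy again
        rate = sample_beta(5.2, 3.7, 1.5, 2.8, rng)
    return 60.0 / rate                           # rate is m/h, so 60/rate is min/m
```

Because the sampled rate is bounded by the distribution's range, the duration per metre is bounded too: between about 21.4 and 40 minutes in sandy soil, and between about 54.5 and 100 minutes in heavy clay.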
resource, it will be assigned to the second clone. Note how this ensures that
the lining activity cannot begin until both the excavation activity and the
unloading activity have been completed. Moreover, the next train entity
arriving at the tunnel face will not be able to begin the excavation or unloading
activities until the second clone releases the TBM resource.
Once the second clone captures the TBM resource, it proceeds to the
lining activity and resetting activities, which are both unconstrained with
durations of 24 minutes and 15 minutes, respectively. Having completed
these two activities, it releases the TBM resource and is destroyed.
As we should expect, when the model is run this time, the results are quite
different. This time, the statistics report tells us that the utilization of the
crane and the TBM is 28.1% and 100%, respectively, and that the average
waiting time for them is 0 and 19.994 minutes, respectively.¹ Clearly, the
more accurate model shows that the TBM is a system bottleneck. Figure 5.18
shows a graph of train waiting time vs. simulation time. The graph shows
that the amount of time trains wait for the TBM falls as simulation proceeds
(which is what we would expect as the amount of time trains spend traveling
increases).
The production curve and train cycle time shown in Figures 5.19 and 5.20,
respectively, are also different from our earlier results. The production curve
no longer has any curvature. This is because the process is no longer limited
by the travel time of the trains; the production rate of the TBM has become
the limiting factor. Also, the portion of the curve that has a slightly lower
slope shows the portion of the tunnel in which heavy clay was encountered.
The cycle time for the trains is markedly different. We can clearly see
the portion of the tunnel in which progress was slowed due to difficult soil
conditions. Moreover, the cycle time in the two different types of soil is fairly
constant. Again, this is because the travel time of the trains is no longer a
limiting factor in our model.
¹ Because of the way excavation duration is modelled, our model is now stochastic. This
means that if you develop the model yourself you may obtain slightly different results.
Figure 5.18: Time Trains Wait for TBM vs. Simulation Time
[Chart: Production (m) vs. Simulation Time (min)]
[Chart: Cycle Time (min) vs. Simulation Time (min)]
InitialState (input): Indicates the state of the Valve at the start of simu-
lation: opened or closed.
"BlockExtension." This Valve element has an initial state of "Closed" and its
AutoClose property is set to 1. Thus, the track extension entity will remain at
the Valve until it is opened. When this finally happens, the entity proceeds
to a Capture element (causing the Valve to close automatically) where it
requests the track resource with a priority of 1 rather than the default of
0. It does this so that trains will be blocked from entering the tunnel while
track extension is taking place. Once it is granted the track resource, it
proceeds to another Capture element where it requests the TBM resource.
Once it is granted the TBM resource, it proceeds to release the TBM resource
immediately. The reason the entity requests the TBM resource is to ensure
that the TBM is not engaged with lining or resetting while track extension is
taking place; however, because the TBM is not actually required to perform
track extension, it is released immediately and will be idle during the process.
Next, the entity moves to a Task element labelled "ExtendTrack" that models
the track extension activity. This is an unconstrained Task with a duration of
240 minutes (4 hours). Once track extension is complete, the track resource
is released (allowing trains to once again enter the tunnel) and the entity
completes its cycle by returning to the Valve labelled "BlockExtension" where
it will wait until the Valve is opened again.
The cycle for the survey entity is the same, except this time, the request
for the track resource is made with a priority of 2 to ensure that surveying
takes precedence over track extension. In addition, the duration of the survey
activity is 480 minutes (8 hours).
It remains to discuss how the Valves that block the track extension and
survey entities are opened. Notice that four elements have been inserted into
the model following the Release element labelled "ReleaseTBM": two
ConditionalBranch elements and two Activator elements. When a train begins
its return journey and enters the first of these ConditionalBranch elements
(labelled "CheckExtension"), a check is made to see if the value of the
production counter is a multiple of 6. If so, the entity will flow to the
Activator labelled "ActivateExtension", causing the "BlockExtension" Valve to be
opened, and from there, proceeds to the next ConditionalBranch element;
if the production counter is not a multiple of 6, it simply proceeds directly
to the next ConditionalBranch element, skipping the Activator. The formula
used to make this check is:
Return Count("Chainage") Mod 6 = 0
5.4. ADDING USER WRITTEN CODE TO MODELS 169
This formula uses the Visual Basic Mod operator to determine if the current
chainage is a multiple of 6. If so, the boolean value True is returned; otherwise
False is returned. The check whether surveying needs to take place works in
the same way, except this time the formula is:
Return Count("Chainage") Mod 90 = 0
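The effect of these two checks can be sketched outside Simphony. The following Python fragment is an illustration only (Simphony formulas are written in Visual Basic); the function names are ours. It shows the same modulo logic and why a survey chainage always coincides with a track-extension chainage:

```python
def needs_track_extension(chainage_m: int) -> bool:
    # Track is extended every 6 m of excavated tunnel.
    return chainage_m % 6 == 0

def needs_survey(chainage_m: int) -> bool:
    # Surveying takes place every 90 m.
    return chainage_m % 90 == 0

# Every multiple of 90 is also a multiple of 6, so a survey chainage always
# coincides with a track-extension chainage; the survey entity's higher
# request priority (2 vs. 1) decides which activity captures the track first.
```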
To illustrate the use of the Execute element, we will redevelop the tunnelling
example from the previous sections using Execute elements almost exclusively.
Note that throughout this section we assume some familiarity with the
Visual Basic programming language. For those readers who are unfamiliar
with Visual Basic, and for those who would like to review it, we provide a
short introduction to Visual Basic in Appendix B.
2. It provides a statistic that tracks production over time (see for example,
Figures 5.14 and 5.19 above), and
The Execute element with which we replace the Counter will need to perform
all three.
travel activity is dependent upon the counter we've just removed. In fact,
there are five elements containing formulas that are dependent upon that
counter: the travel, excavate and return activities, together with the two
ConditionalBranch elements. In order to make our model run properly, we
need to modify the formulas of all five elements so that they reference the
GX(0) global property instead of the counter. For example, the formula for
the duration of the travel activity needs to be changed from:
Return Count("Chainage") / 70.0
to:
Return GX(0) / 70.0
Once the formulas are updated, our model will run and produce the same
results as it did previously.
Return True
2. the connection point to which the entity should be routed when the
event is processed, and
3. the amount of time that needs to pass before the event should be processed.
As an example, here is the formula for the Execute element that replaces the
travel activity of our model:
ScheduleEvent(Element.OutputPoint, GX(0) / 70.0)
Return False
The first line of this formula is the call to the ScheduleEvent method. In
this example, the method only has two arguments, so it is assumed that the
entity being scheduled is the entity flowing through the Execute element.
The first of the two arguments specifies the connection point to which the
entity should be routed when the event is processed. In this case, it is the
output point of the modelling element associated with the formula (i.e., the
output point of the Execute element). This means that when the event is
processed, the entity will flow out of the Execute element. The second of
the two arguments is the amount of time that needs to pass before the event
should be processed (i.e., the duration of the activity). This is the same
calculation as was used for the duration of the Task element. Finally, notice
that the value False is returned. This is important, because we don't want
the Execute element to allow the entity to pass on after the formula has
been executed; rather, we want the entity to be delayed until the event is
processed.
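The scheduling behaviour described above can be imitated with an ordinary priority queue. The sketch below is not Simphony's implementation; it simply illustrates what the two-argument form of ScheduleEvent accomplishes: file a (time, destination) pair in a future-event list and release entities in time order.

```python
import heapq

class EventList:
    """A minimal future-event list: events are (time, destination) pairs,
    processed in chronological order."""
    def __init__(self):
        self._events = []
        self._seq = 0          # tie-breaker so equal times stay FIFO
        self.now = 0.0

    def schedule(self, delay, destination):
        # Equivalent in spirit to ScheduleEvent(destination, delay).
        heapq.heappush(self._events, (self.now + delay, self._seq, destination))
        self._seq += 1

    def process_next(self):
        # Pop the earliest event, advance the clock, and report where
        # the entity should flow next.
        time, _, destination = heapq.heappop(self._events)
        self.now = time
        return destination

events = EventList()
chainage = 70.0                                   # stands in for GX(0)
events.schedule(chainage / 70.0, "TravelDone")    # the travel-activity delay
events.schedule(0.5, "Other")                     # some earlier event
```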
All of the other Task elements are converted to Execute elements in the
same way. The only real exception is the excavation activity. Its formula
looks like this:
Select Case GX(0)
End Select
Return False
1. the entity making the request (if omitted, the entity passing through
the Execute element is assumed),
4. the connection point to which the entity should be routed when the
requested servers are granted,
5. the name of the file in which the entity will wait, and
To illustrate the use of this method, here is the formula for the Execute
element that replaces the Capture element labelled "CaptureTrack" in our
model:
RequestResource("Track", 1, Element.OutputPoint, "TrackQ")
Return False
The first line in the formula is the call to RequestResource. The first
argument of the method is omitted, so it will be assumed that the entity
passing through the Execute element will be making the request. The second
argument specifies the Resource element named "Track" as being requested. The
third specifies that only one server is required. The fourth argument specifies
the connection point to which the entity should be routed when the track
resource is granted. In this case it is the output point of the Execute element,
so when the resource is captured, the entity will flow out of the Execute
element. The fifth argument specifies the File element "TrackQ" as the place
in which the entity will wait if the resource is unavailable. The final (sixth)
argument is omitted, so the request is assumed to have a priority of zero.
1. the entity releasing the resource (if omitted, the entity passing through
the Execute element is assumed),
To illustrate the use of this method, here is the formula for the Execute
element that replaces the Release element labelled "ReleaseTrack" in our
model:
ReleaseResource("Track", 1)
Return True
The first line in this formula is the call to ReleaseResource. The first
argument of the method is omitted, so it will be assumed that the entity
passing through the Execute element will be releasing the resource. The second
argument specifies the Resource element named "Track" as being released, while
the third specifies that one server is being released. Finally, the value True
(rather than the value False) is returned, which causes the Execute element
to route the entity out of its output point.
All of the other Release elements are converted in a similar manner.
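The capture-and-release semantics described above can be mimicked with a small queue-backed class. This is an illustration of the behaviour only (Simphony's actual Resource element is more elaborate, and the class below is ours): servers are granted immediately when free, waiting entities are held in a file ordered by priority, and a release hands the server straight to the next waiter.

```python
from collections import deque

class Resource:
    """Sketch of RequestResource/ReleaseResource semantics: a fixed number
    of servers and a priority-ordered wait file (FIFO within a priority)."""
    def __init__(self, servers):
        self.free = servers
        self.queue = deque()   # entries are (priority, entity)

    def request(self, entity, priority=0):
        if self.free > 0:
            self.free -= 1
            return True        # granted immediately; entity flows on
        # Otherwise the entity waits; highest priority is served first.
        self.queue.append((priority, entity))
        self.queue = deque(sorted(self.queue, key=lambda e: -e[0]))
        return False

    def release(self):
        if self.queue:
            _, entity = self.queue.popleft()
            return entity      # server passes straight to the next waiter
        self.free += 1
        return None

track = Resource(servers=1)
track.request("Train 1")                 # granted
track.request("Train 2")                 # waits in the file
track.request("Extension", priority=1)   # jumps ahead of Train 2
```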
This completes our conversion of the elements in our model to Execute
elements. The remaining elements would be considerably more difficult (though
not impossible!) to convert, so we will not include them here. The converted
model is shown in Figure 5.25.
Method Arguments
SampleBeta Shape1, Shape2, Low, High
SampleExponential Mean
SampleGamma Shape, Scale
SampleLogNormal Location, Shape
SampleNormal Mean, Standard Deviation
SampleTriangular Low, High, Mode
SampleUniform Low, High
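For readers who want to experiment with these distributions outside Simphony, Python's standard random module offers rough counterparts. The parameterization below follows the table, but the function names are ours, not Simphony's:

```python
import random

def sample_beta(shape1, shape2, low, high):
    # Beta sample rescaled from [0, 1] to [low, high].
    return low + (high - low) * random.betavariate(shape1, shape2)

def sample_exponential(mean):
    # random.expovariate takes a rate, i.e. 1/mean.
    return random.expovariate(1.0 / mean)

def sample_triangular(low, high, mode):
    return random.triangular(low, high, mode)

random.seed(42)
durations = [sample_exponential(3.85) for _ in range(10_000)]
mean_duration = sum(durations) / len(durations)  # close to the mean of 3.85
```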
Chapter 6
Continuous Simulation
In this chapter we discuss combined models, whereby variables that change
continuously over time are required to be part of a discrete event model such
as the ones we have discussed so far in this book. The need for such combined
models often arises in practice (Klingener, 1996).
Imagine a model where we are modelling an excavation process but where
dewatering is required, for example. The excavation is a discrete event model
while dewatering is continuous. Likewise, imagine a tunnelling process in
which the TBM advancement is continuous through a segment of ground, as
opposed to occupying a discrete time interval for a given metre of length. The
rest of the model is discrete, while the TBM advance through the ground is a
function of the penetration rate of the TBM; the length excavated is therefore
determined from a rate function rather than given as a fixed variable. There
are numerous other examples, including those where the flow of water or other
material (e.g., Agilia concrete, asphalt, etc.) or conveyor belts combine with
the typical models we have seen so far.
182 CHAPTER 6. CONTINUOUS SIMULATION
Conservation of Energy
In this system, like any other natural system, the law of conservation of
energy holds true; i.e., potential energy in the system is transformed into
kinetic energy as time advances:
    (1/2)mv² = mgh.    (6.2)
By canceling m from both sides of Equation 6.2, and solving for v we obtain:
    v = √(2gh).    (6.3)
Continuity of Flow
If we conceptualize the water tank as a large diameter pipe in a vertical
position (i.e., flow is vertical), then the law that governs physical flow of
fluids holds true, i.e., continuity of flow. This law can be applied to the
transition of flow from a big diameter pipe (the tank) to a small diameter
pipe (the outflow pipe):
    A₁v₁ = A₂v₂,    (6.4)
where A₁ is the cross-sectional area of the tank, A₂ is the cross-sectional area
of the outflow pipe, v₁ is the draw-down velocity of the water in the tank,
and v₂ is the velocity at which water flows through the outflow pipe.
The velocity at which the water in the tank is drawn down can be equated
to the rate at which the height of water in the tank changes. This can be
expressed as:
    v₁ = −dh/dt.    (6.5)
Now the parameter v in Equation 6.3 is the same as parameter v₂ in Equation 6.4,
so substituting for v₁ and v₂ in Equation 6.4 from Equations 6.3 and 6.5,
we obtain:
    −A₁ dh/dt = A₂√(2gh),    (6.6)
    dh/dt = −(A₂/A₁)√(2gh).    (6.7)
The constants on the right-hand side of Equation 6.7 (A₁, A₂, 2 and g) can
be separated from the variable (h) to obtain:
    dh/dt = −k√h.    (6.8)
For illustration, if the cross-sectional area of the tank is 1 m², that of
the orifice is 5 cm² (0.0005 m²), and g (the acceleration due to gravity) is
9.81 m/s², then we have:

    dh/dt = −(0.0005/1)√(2 × 9.81)√h ≈ −0.0022147√h.    (6.9)
Equation 6.9 models the rate at which the height of the water in the tank
will drop over time.
    (1/√h)(dh/dt) = −k.    (6.10)
Next, we integrate both sides with respect to t:
    ∫(1/√h)(dh/dt) dt = −∫k dt.    (6.11)
h dt
The left-hand side of this equation can be simplified via the theorem of
Integration by Substitution to give:
    ∫dh/√h = −∫k dt.    (6.12)
h
6.1. DIFFERENTIAL EQUATIONS 185
We leave it as an exercise for the reader to verify that this solution for h
satisfies Equation 6.8. Finally, we know that h = 10 when t = 0, so we must
have c = 2√10, and previously (in Equation 6.9), we had determined that
k ≈ 0.0022147, so:

    h ≈ (−(0.0022147/2)t + (2√10)/2)² ≈ (−0.0011074t + 3.1623)².    (6.15)
The problem of water draining from a tank described above is then reduced
to the state variable being represented by a Stock element and the rate shown
in Equation 6.9 embedded in the Flow element. Simphony will then model the
value as time progresses using the Runge-Kutta method (numerically solving
Equation 6.9).
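A numerical sketch of that idea: the classical fourth-order Runge-Kutta method applied to Equation 6.9 (an illustration of the technique, not Simphony's internal code) reproduces the analytic solution of Equation 6.15 closely.

```python
import math

def dh_dt(h):
    # Equation 6.9: rate of change of water height in the tank.
    return -0.0022147 * math.sqrt(max(h, 0.0))

def rk4_step(h, dt):
    # One classical fourth-order Runge-Kutta step.
    k1 = dh_dt(h)
    k2 = dh_dt(h + 0.5 * dt * k1)
    k3 = dh_dt(h + 0.5 * dt * k2)
    k4 = dh_dt(h + dt * k3)
    return h + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

h, dt = 10.0, 1.0            # start at 10 m, 1-second steps
for _ in range(1000):
    h = rk4_step(h, dt)

# Equation 6.15 evaluated at t = 1000 s; h and analytic agree closely.
analytic = (-0.0011074 * 1000 + 3.1623) ** 2
```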
6.2.1 Stock
A Stock element represents a state variable in your model. You can have
as many Stock elements as you wish. Both the input and output point of a
Stock element may only be connected to a Flow element. Each Stock element
has the following properties:
InitialValue (input): The value of the Stock element at the time simula-
tion commences.
Value (output): The current value of the Stock. This value is maintained
by the simulation environment and changes continuously during simulation.
It may also be changed directly by another element in the model (an
Execute element, for example).
In addition to these properties, a Stock element also has a statistic named:
Statistic: An intrinsic (time-dependent) statistic that tracks the value of
the state variable over time.
6.2.2 Source
A Source element represents a source of flow from outside your model. It is
assumed that the capacity of the source is unlimited. A Source element may
only be connected to a Flow element and has no inputs and no statistics. If
you're interested in keeping track of the amount of flow that is coming into
your model, you should use a Stock element with an unconnected input point
rather than this element.
6.2. CONTINUOUS MODELLING ELEMENTS 187
6.2.3 Sink
A Sink element represents a destination for flow to somewhere outside your
model. It is assumed that the capacity of the sink is unlimited. A Sink
element may only be connected to a Flow element and has no inputs and no
statistics. If you're interested in keeping track of the amount of flow that
is leaving your model, you should use a Stock element with an unconnected
output point rather than this element.
6.2.4 Flow
A Flow element represents a rate of flow into or out of a stock. The input
point of a Flow element can only be connected to a Stock or a Source and
the output point may only be connected to a Stock or a Sink. Each Flow
element has the following properties:
6.2.5 Watch
A Watch element is responsible for observing a Stock element and generating
an entity if and when a state event occurs. Unlike the previous elements
that have been discussed, the Watch element is intended to be part of your
discrete event model. It is the element that permits communication from the
continuous part of a model to the discrete part. Each Watch element has the
following properties:
Stock (input): The name of the Stock element the Watch element is to
observe.
[Figure: A Watch element detecting a negative state transition: the Stock value crossing the Threshold within the Tolerance band, plotted against Time]
[Figure: Water Level (m) vs. Simulation Time (seconds)]
will consider the concentration to have equalized when the first tank contains
74.99 pounds of chemical.
From the flows specified between the two tanks we obtain the equations:
    y₁′(t) = (1/50)y₂(t) − (1/50)y₁(t),    (6.17)
and
    y₂′(t) = (1/50)y₁(t) − (1/50)y₂(t).    (6.18)
We also know that the total amount of chemical in the system is constant,
so we must have:
    y₂(t) = 150 − y₁(t).    (6.19)
Substituting Equation 6.19 into Equation 6.17 gives:
    y₁′(t) + (1/25)y₁(t) − 3 = 0,    (6.20)
6.4. EXAMPLE: CHEMICAL TANKS 193
It's now a simple matter to determine that y₁(t) = 74.99 when t ≈ 223.07
minutes.
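That figure can be verified directly. Solving Equation 6.20 with the initial condition y₁(0) = 0 gives y₁(t) = 75(1 − e^(−t/25)), which can be inverted for the 74.99-pound mark:

```python
import math

# Solution of Equation 6.20 with y1(0) = 0.
def y1(t):
    return 75.0 * (1.0 - math.exp(-t / 25.0))

# Invert for the time at which the first tank holds 74.99 pounds:
# 74.99 = 75 * (1 - exp(-t/25))  =>  t = -25 * ln(1 - 74.99/75)
t_equalized = -25.0 * math.log(1.0 - 74.99 / 75.0)
# t_equalized is approximately 223.07 minutes, matching the text
```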
The two Stock elements in this model are labelled "Tank #1" and "Tank #2"
and the value of each represents the amount of chemical in the
corresponding tank. The InitialValue property of "Tank #1" is set to 0, and the
InitialValue of "Tank #2" is set to 150. The RecordingInterval property of
both is left at the default value of 1.
The single Watch element is responsible for detecting when the concentration
of chemical in the first tank reaches 74.99 pounds. Its Stock property
is set to "Tank #1" and its Threshold property is set to 74.99. Since the
concentration in the first tank is expected to rise during simulation, the
Direction property of the Watch is set to Positive. Finally, as we do not expect
the first tank to ever reach a concentration of 75 pounds (except in the limit),
the tolerance is set to 0.01.
When the Watch element generates an entity, it is routed to a Counter
element. This Counter has its Limit property set to 1, thus causing simulation
to terminate as soon as the Watch element generates an entity.
The Rate property of the Flow element labelled "Flow #1" is set to the
following code:
Return GetStockValue("Tank #1") / 50
The rate of chemical flow out of the first tank is, of course, dependent on
the concentration in the tank, so this code looks up the current value of the
"Tank #1" Stock element. The rate code for the Flow element labelled
"Flow #2" is similar:
Return GetStockValue("Tank #2") / 50
When executed, we can see that the model terminates after 224 minutes
by examining the Time property of the Counter element. This agrees well
with the value calculated analytically. To view a graph showing the change
in chemical concentration in the first tank over time, right-click on the Stock
element labelled "Tank #1", and select the View Statistic menu item. Switch
to the Time Chart tab to see the graph. A similar graph can be viewed from
the Stock element labelled "Tank #2". In this case, it can be seen that the
concentration in the tank decreases over time.
Over the first 180 days, 30 houses are occupied (1 every 6 days with
the first on day 6).
Over the next 180 days, 60 houses are occupied (1 every 3 days with
the first on day 183).
Over the next 180 days, 72 houses are occupied (1 every 2½ days with
the first on day 362½).
Over the final 180 days, the remaining 36 houses are occupied (1 every
5 days with the first on day 545).
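As a quick check of the absorption schedule above, the phase totals can be tallied (a small illustration; the numbers come from the problem statement):

```python
# Cumulative count of occupied houses under the stated absorption rates.
phases = [
    (180, 6.0),   # first 180 days: 1 house every 6 days   -> 30 houses
    (180, 3.0),   # next 180 days:  1 house every 3 days   -> 60 houses
    (180, 2.5),   # next 180 days:  1 house every 2.5 days -> 72 houses
    (180, 5.0),   # final 180 days: 1 house every 5 days   -> 36 houses
]
houses = sum(int(days / interval) for days, interval in phases)
# houses == 198, the full build-out of the development
```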
1. Based on the rates projected, forecast the cost associated with hauling
the sewage as the development progresses over a 3 year (1,080 day)
period.
2. Determine if there is a point in time where the new home sales have to
stop since the sewer handling has reached its limit.
6.5. EXAMPLE: SANITARY SEWER HANDLING 197
The continuous portion of the model will be responsible for two things:
1. the flow of sewage from the houses to the transfer tank, and
The model will make use of two global variables to control the flow of sewage
in the continuous portion. The GX(0) variable will be used to track the
number of houses sold; thus the rate of sewage flow into the transfer tank
will be given by GX(0) × 1.2 m³/day. The GX(1) variable will be used to
control the flow of sewage from the transfer tank to a sewage hauling truck.
When a truck is being loaded this variable will be set to a value of 1, and
when a truck is not present it will be set to 0; thus the rate of sewage flow
out of the transfer tank will be given by GX(1) × 0.9 m³/min. The flow of
sewage is illustrated in Figure 6.7.
[Figure 6.7: Flow of sewage from Houses into the Transfer Tank (GX(0) × 1.2 m³/day) and from the Transfer Tank to the Truck (GX(1) × 0.9 m³/min)]
need to convert all of the rates and durations expressed in minutes to days
by either multiplying or dividing by the factor 24 × 60 = 1,440. Table 6.1
summarizes the results of these calculations. The model will be configured
to terminate at the 1,080 day mark by setting the MaxTime property of the
scenario to 1,080.
Finally, since several of the durations specified in the model are stochastic,
the model should be run multiple times to obtain a range of results. In our
example, we will set the RunCount property of the scenario to 30.
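The unit conversion mentioned above is mechanical but easy to get wrong. A small sketch (the names are ours; the two rates are from the text, and any other entries in Table 6.1 convert the same way):

```python
MIN_PER_DAY = 24 * 60    # 1,440 minutes in a day

# The truck-loading rate is given per minute; re-expressed per day so the
# whole model can run on a single time unit.
truck_pump_rate_m3_per_day = 0.9 * MIN_PER_DAY   # 0.9 m3/min -> 1296 m3/day
house_flow_m3_per_day = 1.2                      # already per day

def minutes_to_days(minutes):
    # Durations given in minutes divide by the same factor.
    return minutes / MIN_PER_DAY
```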
Return 3
Case Is < 540
Return 2.5
Case Else
Return 5
End Select
This code ensures that houses are created as specified in the problem
statement. For example, if the current simulation time is less than 180 days, the
time between creation will be 6 days. Thus a total of 180 ÷ 6 = 30 houses
will be created during that time. Similarly, if the current simulation time is
bigger than (or equal to) 180 days and less than 360 days, the time between
creation will be 3 days. So a total of (360 − 180) ÷ 3 = 60 houses will be
created during that time.
Once a house entity has been created, it passes through an Execute element
labelled "Add House" and is then destroyed. The code defined for the
Execute element is:
GX(0) = GX(0) + 1
CollectStatistic("Houses", GX(0))
Return True
This code first increments the global variable tracking the total number of
houses, and then collects the new value into a Statistic element labelled
"Houses". The Interpretation property of the "Houses" statistic is set to
ContinuousVariable, and it can produce a time chart describing the total
number of houses over time.
Next, the discrete event part of the model is required to model the travel
of the sewage hauling trucks. The four trucks are modelled as entities and
are created at the start of simulation by a Create element. They are then
immediately routed to a Capture element where they attempt to capture the
loading area. Once the loading area has been successfully captured, the truck
entity passes through an unconstrained task labelled "Prepare to Load" that
models the maneuvering of the truck into the loading area.
After the truck is in position to be loaded, it enters a Valve element
labelled "Block". Initially, the state of this Valve element is closed. As
simulation progresses, the state (open vs. closed) of this valve is controlled
by the continuous portion of the model: when there is sufficient sewage in
the transfer tank to fill a truck, the valve will be open, and when the level
of sewage in the tank is less than the capacity of a truck, it will be closed.
By operating in this way, the valve ensures that a truck cannot begin to load
until there is sufficient sewage in the tank to fill it.
Once the truck has been permitted to load, it enters an Execute element
labelled "Start Loading" that contains the code:
GX(1) = 1.0
Return True
This code starts the flow of sewage into the truck by setting the GX(1) global
variable to the value 1. Next, the truck entity enters a Valve element labelled
"Load" that has its InitialState property set to Closed and its AutoClose
The first thing this code does is read the current value of the Stock element
labelled "Truck" (located in the continuous part of the model) and assign
the value to the LX(1) attribute of the entity. We do this so we can record
the correct dumping cost when the truck is unloaded at the processing plant.
Next, the value of the "Truck" stock is reset to zero in preparation for loading
the next truck. Finally, the GX(1) global variable is set to the value 0, causing
the flow of sewage to the truck to cease. The truck then proceeds to an
unconstrained task labelled "Prepare to Haul" that models the maneuvering
of the truck out of the loading area. After this, the truck releases the loading
area, which becomes available to other trucks.
At this point the truck begins its journey to the processing plant, which
is modelled by an unconstrained task labelled "Haul". Once at the processing
plant, the truck enters a Capture element wherein it attempts to capture one
of the dump bays. After a dump bay is obtained, the truck passes through
unconstrained tasks that model maneuvering in preparation to unload, the
unloading process itself, and maneuvering out of the dump bay. Thereafter,
the truck releases the dump bay via a Release element.
Next, the truck passes through a conditional branch that filters out the
additional traffic to the processing plant (discussed in a moment) and then
passes through a CostCollect element that records the cost of dumping the
sewage. The unit cost for this is $13.50 and the quantity is given by the code:
Return LX(1)
The truck then passes through an unconstrained task modelling its return to
the loading area, a CostCollect element that registers the $190.00 in trucking
costs, and finally begins its cycle anew.
² This is an example of a common strategy for modelling continuous activities. For
more information see Section 6.7.1.
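The two CostCollect elements together charge each truck cycle a dump fee proportional to the load plus a flat trip cost. A minimal sketch of that arithmetic (the function name is ours; the rates are from the text):

```python
# Cost accumulated by one truck cycle, as the CostCollect elements record it.
DUMP_RATE = 13.50     # dollars per cubic metre of sewage dumped
TRIP_COST = 190.00    # dollars per round trip

def cycle_cost(load_m3):
    # load_m3 plays the role of the LX(1) attribute read from the Truck stock.
    return DUMP_RATE * load_m3 + TRIP_COST

# e.g., a 10 m3 load costs 13.50 * 10 + 190 = 325 dollars for the cycle
```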
the "Transfer Tank" stock, and watching for the threshold 9.5 m³ to be passed
in a positive direction within a tolerance of 0.5 m³. When this happens, an
entity passes through an Activator element that will open the Valve labelled
"Block" in the discrete portion of the model, thus permitting trucks to load.
The third Watch element is responsible for detecting when the level of
sewage in the transfer tank is too low for trucks to load. It is observing the
"Transfer Tank" stock, and watching for the threshold 9.5 m³ to be passed
in a negative direction within a tolerance of 0.5 m³. When this happens, an
entity passes through an Activator element that will close the Valve labelled
"Block" in the discrete portion of the model, thus preventing trucks from
loading.
The final Watch element is responsible for detecting if the transfer tank
overflows into the holding pond. It is observing the "Transfer Tank" stock,
and watching for the threshold 57.0 m³ (95% of the tank's capacity) to be
passed in a positive direction within a tolerance of 3.0 m³. When this
happens, an entity will pass through a Counter element, which registers the fact
that an overflow occurred. The simulation time at which the tank first
overflowed can therefore be obtained from the FirstTime statistic of the Counter.
6.5.4 Results
The costs reported by the model (summarized across all 30 runs) are shown
in Figure 6.10. As the standard deviation reported is quite small, we can
safely assume that the costs will be approximately $5.88M.
To determine if (and when) the proposed sewage handling strategy reaches
its limit, we need to examine the FirstTime statistic of the "Overflow"
Counter element. This statistic tracks the time at which an entity first passed
through the counter. For a single run, this statistic will only ever contain
one observation (or zero observations if no entities passed the counter);
however, when summarized across all runs, the statistic can report the average
time at which an entity was first seen, the earliest time, the latest time, and
so forth. It can also generate a histogram or cumulative distribution chart.
The average time reported by the counter in our model is approximately 686
days, with an earliest time of approximately 575 days and a latest time of
approximately 813 days. The cumulative distribution chart is shown in
Figure 6.11. Clearly, the proposed strategy will not be able to keep up during
the final 18 months of the project.
[Figure 6.10: Cost Report. Date: Thursday, June 18, 2015; Project: Model; Scenario: Truck Hauling; Run: All Runs]
[Figure 6.11: Cumulative distribution of the time of first overflow, Simulation Time (days)]
accelerating construction is about $6,000 per day, would it make sense for
him to take the penalty or accelerate construction?
TBM,
6.6. EXAMPLE: TUNNEL CONSTRUCTION 207
rocks,
surveying.
With the exception of surveying, all other delay types cannot be scheduled
or anticipated beforehand. In other words, more realistic and precise
information about delays in this category can only be obtained when the
project has commenced. However, this is not a problem because most
tunnelling projects take a long time to complete; hence, it would still be possible
to perform meaningful simulation-based delay analysis using data from the
site, which would benefit the project.
Details of delays, also referred to here as delay information, that need to
be established on a project-by-project basis include: verification of whether
a given delay exists, its interarrival times, and its duration. There is a
scientifically proven systematic method for obtaining this information: Method
Productivity Delay Modelling (MPDM). MPDM is a technique that was
proposed by Adrian and Boyer (1976) for measurement, prediction and
improvement of a project's productivity in relation to the amount of delay
experienced. MPDM was applied to a real TBM tunnelling project to obtain
information similar to the above.
A description of the project is presented along with related process details
and the delay information obtained for it. Thereafter, a combined discrete-
continuous simulation model for the tunnelling process is presented with
these delays embedded within it. Results of the experimentation done with
the simulation model to investigate the influence of delay dynamics on the
tunnelling process are then presented and discussed.
The TBM tunnel studied, the SA1A project, is one segment of a larger
municipal project, the South Edmonton Sanitary Sewer (SESS) overall
strategy, connecting the SW1 pump station at Ellerslie Road and Parsons Road
to Stages SA1b&c. The alignment runs north from the intersection of Parsons
Road and Ellerslie Road, then turns northeast before finally crossing
the Anthony Henday as well as 91 St NW. See Figure 6.12.
This described alignment crosses the transportation and utility corridor
that contains a number of existing pipelines, including a Nova Chem 273 mm HVP
line and an ATCO Gas 508 mm line. The tunnel section of interest along this
alignment is approximately 706 m and is to be constructed using an M100
TBM (M17). The project timeline was approximately one year. Details of
this tunnel and activities based on the way the construction process was set
up on site are summarized in Table 6.2.
Parameter Value
Tunnel length 700 m
Non-delayed production tunnelling rate 0.45 m/hr
Train travel to TBM 4 km/hr
Train return 3.5 km/hr
Unload liners 15 minutes
Unload spoil 15 minutes
Load new liners 6 minutes
Install liners 24 minutes
Reset TBM 15 minutes
Surveying done every 90 m 8 hours
Track extension every 6 m 4 hours
[Table 6.3: Average delay duration in hours for each delay type: TBM; TBM Hydraulic, Electrical, and Water Systems; Cleaning TBM; Surveying; Weather/Crane; Rocks; Voids and PVC As-Builts; and Miscellaneous Delays]
Table 6.4: Statistical Distributions for Modelling the Dierent Delay Types
Delay Type Time Between Delay (hrs) Delay Duration (hrs)
TBM Exponential(117) Exponential(3.85)
TBM Hydraulic Systems Exponential(90.41) Exponential(4.84)
TBM Electrical Systems Exponential(231.13) Exponential(3.63)
TBM Water Systems Exponential(415.50) Exponential(4.50)
Cleaning TBM Exponential(341.50) Exponential(5.33)
Surveying Exponential(341.50) Exponential(3.83)
Weather/Crane Exponential(424.50) Exponential(4.00)
Rocks Exponential(88.18) Exponential(3.77)
Voids and PVC As-builts Exponential(58.29) Exponential(7.31)
Miscellaneous Delays Exponential(419.75) Exponential(2.88)
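Sampling a delay from Table 6.4 amounts to drawing an interarrival time and a duration from two exponential distributions. A sketch in Python (the table data are abbreviated, and the dictionary and function names are ours, not Simphony's):

```python
import random

# A few rows of Table 6.4 as data:
# (mean time between delays, mean delay duration), both in hours.
DELAYS = {
    "TBM":                   (117.00, 3.85),
    "TBM Hydraulic Systems": ( 90.41, 4.84),
    "Rocks":                 ( 88.18, 3.77),
    # ... the remaining delay types follow the same pattern
}

def next_delay(delay_type, rng=random):
    """Sample one (interarrival, duration) pair for a delay type; both
    quantities are exponentially distributed, as fitted in Table 6.4."""
    mean_between, mean_duration = DELAYS[delay_type]
    return rng.expovariate(1 / mean_between), rng.expovariate(1 / mean_duration)

random.seed(1)
gap, duration = next_delay("Rocks")
```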
local and global attributes of the model. They have been summarized in
Table 6.5.
The global attributes GX(10) through to GX(17) were all initialized to
1.0 prior to the start of simulation by embedding user-written code within
the Initialize property of the scenario. This had the effect of ignoring delay
impacts on TBM excavation until these delays actually occurred.
track resource becomes available. The first train entity is routed into a
ConditionalBranch element labelled "GX(4) = 1.0?". The condition for this
element is evaluated to determine whether the full tunnel length has been
excavated. In case the full length has been excavated, the train entity is
routed out the True branch into a Release element, labelled "Release Track",
causing it to release the train track resource, after which it is destroyed.
Otherwise, the train entity is routed out the False branch into a Valve
element labelled "Halt Entity for Continuous Travel" where it is halted. On
transfer into this Valve element, the train entity triggers the evaluation of
the formula in the incoming trace property, activating the model that mimics
the continuous travel of the train from working shaft to tunnel face. At
the end of the continuous travel to the tunnel face, the Valve is opened by
an Activator releasing the train entity, which flows into a "Capture TBM"
element. As the entity leaves the Valve, it causes it to close as a result of its
auto close property being set to 1.0. The train entity requests the TBM
resource when routed into the "Capture TBM" element. If the TBM is busy,
the train entity will be queued in the File labelled "TBMQ". When the TBM
resource becomes available, the train entity is routed out of the "Capture
TBM" element into the Generate element where it gets cloned.
This cloning process emulates unloading of liners taking place concurrently with TBM excavation. The original entity that possesses the captured TBM, i.e., the train entity, is routed out through the top branch of the Generate element into a Valve labelled Halt Entity until Continuous Excavation is Complete, where it is halted.
The clone entity that represents liners, which need to be offloaded from the train, is routed out through the bottom branch into a Task element labelled UnloadLiners, where it is delayed for 15 minutes before proceeding to another Generate element labelled Generate2. At this element, another clone is created to represent the TBM lining and resetting tasks. The original entity that represents the unloading of liners is then routed into a Consolidate element labelled Consolidate and is halted there if there is no entity waiting at the top branch. The cloned entity that represents the TBM lining and resetting tasks is routed out the bottom branch of the Generate2 element into the Capture TBM2 element, where it makes a request of the TBM resource. If the TBM is still engaged in the continuous excavation task, i.e., assigned to the train entity that is halted at the Halt Entity until Continuous Excavation is Complete Valve, the TBM lining and resetting clone entity will be queued in the File labelled TBMQ.
216 CHAPTER 6. CONTINUOUS SIMULATION
When the train entity is transferred into the Halt Entity until Continuous Excavation is Complete Valve, it evaluates the formula within its incoming trace, activating the simulation of 1 m of TBM advancement in a continuous fashion. At the end of the continuous excavation cycle, a state event is triggered, which causes an Activator to open this Valve. This results in the release of the train entity, which prompts the Valve to close. This entity is then routed into a Release TBM element, where it releases the TBM resource before it is transferred into the top branch of the Consolidate element.
The release of the TBM resource and routing of the train entity into the Consolidate element concurrently trigger two events. The first is related to the TBM lining and resetting clone entity being granted the TBM resource. The other involves consolidation of the train entity and the clone entity that represents the unloading of liners. The consolidated train entity is then routed into a Valve element labelled Halt Entity for Continuous Return, where it triggers the simulation of the train movement back to the working shaft using a continuous approach.
This flag is used to indicate to the trains at the working shaft that TBM excavation is complete; hence, there is no need to travel to the tunnel face. When this point is reached, each train that is granted the train track resource is routed out the True branch of the GX(4) = 1.0? ConditionalBranch element into a Release element, where it releases the train track resource and then gets destroyed by being routed into the Destroy element, i.e., Destroy6.
Destroying all train entities does not mark the end of simulation events. This is because the delay entities keep cycling in their respective sub-models, resulting in continuous scheduling and processing of events. Consequently, the simulation model was set up to terminate when the last loaded train from the TBM returns to the working shaft, i.e., when the last returning train entity is routed into the Counter modelling element labelled Terminate Simulation. A value of one is set for the Limit property of this modelling element to achieve this simulation termination effect.
In the continuous models, the state variables associated with the above enumerated processes are modelled using Stock modelling elements. The rate of change of these stock variables is modelled by connecting Flow elements to the appropriate stocks. Communication between the discrete event models and the continuous models is achieved using Watch elements, global attributes (as switches), Valves, and Activators. Detailed explanations of each model layout used to simulate processes continuously are presented next.
In the discrete case, the time required for the TBM to advance 1 m was computed as the quotient of the length advanced, i.e., 1 m, and the TBM penetration rate. These values were used to schedule simulation events that emulated each excavation cycle for the TBM. In the continuous case, the TBM penetration rate is used to derive the distance that the TBM advances
The other switches, i.e., other than GX(0), relate to the occurrence of interruptions that result in a delay or delays of the TBM excavation process. The Stock labelled Excavated Tunnel Length maps to a state variable that represents the distance that the TBM has advanced in each excavation cycle. The tunnelling process is set up such that the TBM advances 1 m in each cycle. Given that the Excavated Tunnel Length stock value is cumulated each time this continuous model is activated, a method was needed to notify the simulation engine that the 1 m advancement has been achieved so that a state event can be raised at the right point in time. In order to achieve this, a separate target variable, i.e., the GX(1) attribute, was designated as the threshold. This threshold was recursively set 1 m ahead of the Excavated Tunnel Length stock value at the start of each excavation cycle by embedding the following formula within the incoming trace property of the Valve element labelled Halt Entity until Continuous Excavation is Complete.
GX(1) = GetStockValue("Excavated Tunnel Length") + 1.0
entity. This entity flows into a Valve Activator (labelled Release Train Entity after Excavation Cycle) and finally into a Destroy element.

The transfer of the entity into the Release Train Entity after Excavation Cycle Activator has the effect of opening a Valve in the discrete event model labelled Halt Entity until Continuous Excavation is Complete, hence releasing the train entity that it was retaining. As the train entity is being transferred out of this Valve, it triggers the evaluation of the formula within its outgoing trace property. This causes the de-activation of the continuous excavation model (i.e., sets GX(0) to zero) and the increment of the threshold value (i.e., GX(1)) by 1 m. This formula was presented previously when discussing the threshold for the continuous excavation process.
The activation and de-activation of this continuous excavation model are repeated in the course of the simulation until the total excavated length of the tunnel, i.e., the value of the Stock labelled Excavated Tunnel Length, crosses a threshold of 706 m from below within a tolerance of 0.001 m. At the start of simulation, the value of this stock variable was set to 4.0 m.
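The crossing condition that a Watch element monitors can be expressed in ordinary code. The following Python fragment is an illustrative approximation (not Simphony's implementation) of detecting a crossing from below within a tolerance:

```python
def crosses_from_below(prev, curr, threshold, tolerance):
    """True when a monitored value moves from below the threshold to
    within `tolerance` of it (or beyond), the condition under which a
    Watch element with Direction set to Positive raises a state event."""
    return prev < threshold and curr + tolerance >= threshold

# Excavated tunnel length stepping across the 706 m threshold.
print(crosses_from_below(705.9995, 706.0004, 706.0, 0.001))  # True
print(crosses_from_below(700.0, 705.0, 706.0, 0.001))        # False
```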
The Watch element labelled Tunnel Excavation Watch (706 m) is configured to look out for the state event related to this. When this state event is raised, the Tunnel Excavation Watch (706 m) Watch creates an entity that flows through an Execute element and ends up in a Destroy element. When this entity is transferred into the Execute element labelled GX(4) = 1.0, it sets the GX(4) attribute to 1.0. This flag is used to indicate to the trains at the working shaft that TBM excavation is complete, so there is no need to travel to the tunnel face.
simulation, train speed is used to derive the distance travelled by the train. Therefore, train travel speeds are modelled as Flows while the distance/location is modelled as a Stock. The approach used involves cumulating the stock variable as the train travels from the working shaft to the tunnel face and depleting this same stock variable as the train returns from the tunnel face to the working shaft. The initial value of the Train Location Stock was set to zero, positioning the train at the working shaft at the start of simulation.
In the discrete event part of the model for this example (see Figure 6.13), trains are modelled as the main virtual entities. The continuous travel of trains from the working shaft to the tunnel face was activated as soon as a train entity was granted the train track resource. A train entity granted a train track resource would be transferred into a Valve modelling element labelled Halt Entity for Continuous Travel in Figure 6.13. The transfer of a train entity into this Valve evaluates a formula in which the GX(2) switch is set to a value of one. This has the effect of activating the Flow element labelled Train Travel Rate to TBM, resulting in the continuous travel of the train to the tunnel face. The following formula is embedded within the Train Travel Rate to TBM element to evaluate the rate of travel of the train to the tunnel face.
Return GX(2) * 4000.0 / 60.0
The train entity is halted at this Valve until a state event related to the arrival of the train at the tunnel face is raised. The state event related to the train arriving at the tunnel face is raised when the value of the Stock labelled Train Location crosses a location threshold from below within a tolerance of 0.05. The length of the train track, i.e., GX(5), is used as the threshold for the train arrival at tunnel face state event. The Watch modelling element labelled Watch for Train Arrival at Tunnel Face looks out for this state event and creates an entity in response to it. This entity is routed into an Activator modelling element labelled Opens Travel Valve, then to a Trace modelling element, and finally into a Destroy element. As this new entity flows through the Activator, it opens the Valve labelled Halt Entity for Continuous Travel in the discrete event model part.
As the train entity is transferred out of the Halt Entity for Continuous Travel Valve in the discrete event model portion, it evaluates a formula in the outgoing trace property that sets the GX(2) switch to a value of zero. This has the effect of deactivating the Flow labelled Train Travel Rate to TBM, halting the simulation of continuous travel (i.e., the accumulation of the Train Location Stock value) to the tunnel face.
This train entity then flows to subsequent modelling elements that trigger the offloading of liners, excavation, and lining of the next 1 m tunnel section. After all three of these tasks are completed in the discrete event model part, the train entity is transferred into another Valve labelled Halt Entity for Continuous Return, where it is halted until its continuous return to the working shaft is completed. A similar design pattern is used to continuously model the return of the train from the tunnel face. The attribute GX(3) is used as the switch for activating or deactivating the Flow element labelled Train Return Rate from TBM. The activation, i.e., setting GX(3) to a value of one, and the de-activation, i.e., setting the value of GX(3) to zero, are done within the formulas for the incoming and outgoing traces of the Valve element labelled Halt Entity for Continuous Return. This activation and de-activation of the return of the train from the tunnel face are possible because of the following formula embedded within the Train Return Rate from TBM Flow element, which contains the GX(3) switch.
Return GX(3) * 3500.0 / 60.0
The simulation of the continuous return of the train from the TBM is stopped when a state event that signifies the arrival of the train at the working shaft is raised. This state event is raised when the Train Location Stock value crosses a threshold from above within a tolerance of 0.05. A threshold of 0.05 and a tolerance of 0.05 are conveniently chosen to model this state event so that the depletion of the Train Location Stock value by the Train Return Rate from TBM Flow is stopped before it drops below a value of zero. Another new entity is created by the Watch modelling element labelled Watch for Return from TBM. This entity flows into an Activator element labelled Opens Return Valve, then into a Trace element, and finally into a Destroy element. When the entity flows through the Opens Return Valve Activator, it opens the Valve in the discrete event model part labelled Halt Entity for Continuous Return, triggering the release of the train entity. This in turn triggers the onset of other discrete events but marks the end of the processes simulated continuously in any given train cycle.
number of surveys done was tracked by the Counter element labelled Count
Surveys.
only if it was busy at the time that the delay is scheduled to take place. A "Survey delay" or "Crane delay" entity leaving the Task that models delay interarrivals was routed into a ConditionalBranch element (see the 6th and 7th sub-models in Figure 6.17). This ConditionalBranch element had a VB code snippet written within it to determine whether the appropriate resource was idle or busy at that point in time. If the resource was found to be busy, the delay entity would be routed out through the top branch, triggering the preemption of the resource, a delay for a specified duration, and the subsequent release of the resource before being routed back to the Task element that schedules the next occurrence of that delay. Otherwise, if the resource was idle, the delay entity would be routed out of the ConditionalBranch element and back to the Task element that schedules the next occurrence of the delay. The following code snippet was written within the ConditionalBranch element of the "Survey delay" sub-model and demonstrates how the check and entity routing were implemented for that type of delay.
Dim SurveyingResource As Simphony.General.Resource = _
    Scenario.GetElement(Of Simphony.General.Resource)("Surveying Crew")
the continuous excavation process. For example, when a delay due to the
TBM electrical system is encountered, the GX(13) attribute is set to zero
using the following formula.
GX(13) = 0.0
Return Nothing
This formula was embedded within the outgoing trace of the Preempt modelling element labelled Preempt4. Setting the GX(13) switch to zero de-activates the continuous TBM excavation process by setting the value of the Flow element labelled TBM Excavation Rate to zero. The preempting delay entity is routed into a Task element after being granted the resource that it requires. This delay entity's LX(1) attribute is tagged with the time that the delay commenced. The following formula was embedded within the incoming trace property of this Task element to achieve this effect.
LX(1) = TimeNow
Return Nothing
This Task element emulates the time that a given delay persists. This
duration is modelled using an Exponential distribution (see Table 6.4). After
this time elapses, the preempting entity is routed into a Release element
where it releases the resource that it preempted. The following formula was
embedded within the outgoing trace of this Task element so that the duration
of the delay would be stored on the delay entity (i.e., on the LX(2) attribute)
for subsequent collection as a statistic.
LX(2) = TimeNow - LX(1)
Return Nothing
The delay entity is then routed into a Counter element where it registers the number of events realized for that delay type. Thereafter, the delay entity is routed into a Generate modelling element where it is cloned. The original delay entity is routed into a StatisticCollect modelling element labelled Collect Delay Statistics. The LX(2) property of this original delay entity is used in the collection of a delay duration observation. The following formula was embedded within the value property of this StatisticCollect modelling element. These observations were sent to the Statistic element labelled Delay Duration Statistics.

Return LX(2)

The cloned delay entity is then routed back to the first Task element that models the delay interarrival times, and the cycle is repeated once again. This cycle keeps on going until the simulation is explicitly terminated.
[Figure: cumulative probability distribution of the total project duration; simulation time in minutes ×10⁵.]
chart was retrieved from the LastCount property of the Counter modelling element labelled Terminate Simulation shown in Figure 6.13. There was no chart to present for the total project duration from the scenario without delays because this parameter had a standard deviation of zero.
The time that delays persist is collected as a statistic in the simulation model. Train cycle times are another statistic collected in the model. At the end of simulation, the results of these statistics are retrieved from their respective statistic modelling elements and presented in Figures 6.19 and 6.20, respectively.
In order to confirm the accuracy with which delays are modelled, a comparison is made between the numbers of delays that were realized on the construction site and those obtained in the simulation model. Details of these are summarized in Table 6.8. Simulated values are obtained from the mean of the LastCount property of the appropriate Counter modelling element that tracks the number of delays.

It is evident from Table 6.8 that the majority of delays are modelled accurately, with the exception of TBM delays, TBM hydraulic system delays, and delays due to surveying. These were poorly estimated and thus did not occur with the expected frequency based on actual construction data. The most likely reason for this relates to the small sample size used to formulate the representative distributions for modelling delay interarrival times and durations.
[Figure: relative frequency histogram of train cycle times (min ×10⁴).]
[Figure: relative frequency histogram of delay persistence durations (min).]
Sensitivity Analysis
A sensitivity analysis was carried out to assess the effects that unanticipated delays have on the total tunnelling project duration. To achieve this, each unanticipated delay was removed from the simulation model. The delay was removed by setting the Quantity property of the element that creates the corresponding delay entity to zero. In order to quantify the impact of delays on cost, it was assumed that each day comprises a 12-hour shift and that $20,000 is spent each day on tunnel construction. Results of this analysis are summarized in Table 6.9. The maximum possible total project duration of 1872.40 hours (156.03 days), obtained from the simulated scenario in which all delays were considered, is used as the basis for the computation of the values presented in Table 6.9.
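The duration and cost figures in this analysis follow from simple arithmetic on the simulated hours. A sketch of the conversion, under the stated assumptions of one 12-hour shift per day and $20,000 per day:

```python
SHIFT_HOURS_PER_DAY = 12.0   # one 12-hour shift per day
COST_PER_DAY = 20_000.0      # dollars spent per day of tunnelling

def duration_days(hours):
    """Convert simulated working hours to project days."""
    return hours / SHIFT_HOURS_PER_DAY

def cost_saving(baseline_hours, scenario_hours):
    """Cost saved when a delay type is removed, relative to the
    baseline scenario in which all delays are present."""
    return (duration_days(baseline_hours) - duration_days(scenario_hours)) * COST_PER_DAY

print(f"{duration_days(1872.40):.2f} days")  # 156.03 days
```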
which simply resets the value of the Stock element labelled Truck to 0, and then sets the rate of flow into that Stock element to 1.2. The entity then proceeds to a Valve element labelled Block Truck. This Valve has its InitialState property set to Closed and its AutoClose property set to 1, so when the entity arrives it is forced to wait.

Now, because of the call made to the SetFlowRate method, flow begins into the Truck Stock element. As the Stock element fills, it is monitored by a Watch element while the entity is blocked by the Valve. This element is configured to trigger the creation of an entity when the truck is filled to capacity; i.e., its Direction property is set to Positive, its Threshold property is set to slightly less than the capacity of the truck, and its Tolerance property is set to the difference between the truck's capacity and the Threshold property.
The real world is not static or deterministic. Many events are unpredictable, and many processes appear to occur in a random manner. For example, the cycle times of the trucks or shovels in an earthmoving operation vary from cycle to cycle. They are rarely the same. Likewise, the service time for loading a truck varies from load to load. In the real world there are many variables that dictate the outcomes of such operations, which make them appear to be random. For example, the truck cycle time may vary because of the operator, the road conditions, other traffic, changes in weather, unexpected mechanical problems, and so forth. While it would be ideal for the simulation model to include as many of the factors that impact the cycle time, and as much detail of the process, as possible in order to provide more accurate estimates of each service time or random event, in general it would be ill-advised to try to capture all these variables and include them in the model. First, it may not be possible to collect the required input for such variables to feed into the model, and second, the model would be very large, expensive to build, and difficult to manage.
The variability in the real world can be accounted for in a simulation model, however. We use the concept of Monte Carlo Simulation to achieve this. The Monte Carlo Method is a process that makes use of random numbers and the principles of statistical sampling to model random processes.
236 CHAPTER 7. STATISTICAL ASPECTS OF SIMULATION
≈ 126.380 − 64.156 = 62.224.
Now let's try to solve the integral using Monte Carlo Simulation. We construct the Simphony model shown in Figure 7.1. The first thing to note about this model is that the RunCount property of the scenario has been set to 10,000 (see Figure 7.2). This means that rather than executing the model just once, Simphony will execute it 10,000 times. The idea is to evaluate f at a randomly selected point between 20 and 32 each time the model is run.
The first line in this formula generates a random deviate from a uniform distribution with a low of 20 and a high of 32, which is then stored in a local variable named X. The second line in the formula evaluates f at this point and returns the result, which is then added as an observation to the statistic. Upon leaving the StatisticCollect element, the entity is routed to a final Destroy element.
After the model is run, the mean value reported by the statistic is 5.187, so the value of the definite integral should be approximately:

5.187 × (32 − 20) = 62.244,

which agrees closely with the value calculated using the Fundamental Theorem of Calculus.
Now suppose that the function f is not so easy to integrate, say:

∫₂₀³² √((x + 1)/e^(x/20)) dx.
In this case, the Monte Carlo approach becomes more attractive and even necessary. To modify our model to perform this calculation, we need only change the formula of the StatisticCollect element to:
Dim X As Double = SampleUniform(20, 32)
Return System.Math.Sqrt((X + 1) / System.Math.Exp(X / 20))
This time, when the model is run, the mean value reported by the Statistic is 2.701, so the value of the definite integral should be approximately 2.701 × (32 − 20) = 32.412.
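The same calculation can be reproduced outside Simphony. A minimal Python sketch of the Monte Carlo estimate, in which the interval width multiplied by the mean of f at uniformly sampled points approximates the integral (Python's random module stands in for SampleUniform):

```python
import math
import random

def mc_integrate(f, a, b, n=10_000, seed=1):
    """Estimate the integral of f over [a, b] as (b - a) times
    the mean of f evaluated at n uniform random points."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# The harder integrand from the text.
f = lambda x: math.sqrt((x + 1) / math.exp(x / 20))
print(mc_integrate(f, 20, 32))  # close to 32.4
```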
[Figure: relative frequency histogram of durations (minutes).]
[Figure: arrivals on a timeline, with interarrival times marked at days 0, 5, 8, 19, 22, 27, 33, and 36.]
the most likely cost being $2,600/m. We can assume that unit cost is best modelled with a triangular distribution with end points of $2,500 and $2,900 and a mode of $2,600. The total length of the pipeline is 1,200 meters, within 5 meters accuracy. We can model this with a uniform distribution with end points of 1,195 and 1,205. During the simulation, we simply sample values from the triangular distribution of unit cost and the uniform distribution of length, and multiply the two random samples to estimate the cost of that line item for the given iteration. We then sum up all line item costs to get the total for that iteration. When we collect all observations for all iterations, we have a distribution of the total project estimate.
The scenario is configured to run 100 times. The cost report produced after the model has been simulated is shown in Figure 7.7.
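The same range-estimating procedure can be sketched in Python outside Simphony. The snippet below is an illustrative approximation of a single line item (unit cost times length); it does not reproduce the full cost report of Figure 7.7:

```python
import random
import statistics

def simulate_line_item(iterations=100, seed=1):
    """Sample unit cost (triangular) and pipeline length (uniform)
    each iteration and return the resulting line-item costs."""
    rng = random.Random(seed)
    totals = []
    for _ in range(iterations):
        unit_cost = rng.triangular(2500.0, 2900.0, 2600.0)  # $/m, mode at 2600
        length = rng.uniform(1195.0, 1205.0)                # metres
        totals.append(unit_cost * length)
    return totals

costs = simulate_line_item()
print(f"mean line-item cost ≈ ${statistics.mean(costs):,.0f}")
```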
7.3. RANGE ESTIMATING 243
[Figure: project network diagram with activities 1 through 8.]
[Figure: cumulative probability distribution of project duration (days).]
for sampling durations between the lower limit of 10 and upper limit of 20
lies in generating a uniform random number on the range [0, 1] and then
transforming that number into the appropriate range and/or model of the
collected data. The transformation of random numbers into an appropriate
variate will be covered in the next section. In this section we discuss the
generation of uniform random numbers on the range [0, 1].
A true random number, as defined by mathematicians, is very difficult to generate. In today's age of digital computers, simulators settle for a pseudo-random number. Such a number possesses attributes similar to those of a true random number for practical purposes, and from here on we will use the phrase random number to mean a pseudo-random number.
To get a random number on the range [0, 1], one could throw dice, look in a telephone book, draw numbered balls, use tables like the one produced by the RAND Corporation, or use numerical means. Numerical means are adaptable for computer use and, with some care, can be used to generate numbers which appear to be random for all practical purposes. A recursive algorithm that is used to generate random numbers is referred to as a random number generator (RNG). An RNG should produce fairly uniform numbers on the range [0, 1] that appear to be independently sampled, are dense enough on the interval [0, 1], and are reproducible. In addition, the algorithm should be efficient and portable for use in simulation programs.
Numerical techniques for random number generation go back to the early 1940s with the mid-square method introduced by von Neumann and Metropolis. Lehmer (1951) introduced a method referred to as the Linear Congruential Scheme (LCS). Today the most widely used version of the LCS is the multiplicative LCS, which can be defined by the recursive equation:

Zn = a × Zn−1 mod m,    Rn = Zn / m.

To generate random numbers, one usually specifies a seed number Z0 as a starting value. The value of Z1 is then computed, resulting in R1, and
a = 5, m = 7, Z0 = 9.

Zn = 5 × Zn−1 mod 7.

So that:

Z1 = 5 × 9 mod 7 = 45 mod 7 = 3,

and the first generated random number is 3 ÷ 7 ≈ 0.4285714. Table 7.3 below shows the results obtained by continuing the calculations.

n    Zn    Rn
0    9     —
1    3     0.4285714
2    1     0.1428571
3    5     0.7142857
4    4     0.5714286
5    6     0.8571429
6    2     0.2857143
7    3     0.4285714
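The recursion is easy to reproduce in code. A minimal Python implementation of the multiplicative LCS with a = 5, m = 7, and Z0 = 9 regenerates the sequence in Table 7.3:

```python
def lcg(a, m, seed):
    """Multiplicative linear congruential generator:
    Z_n = a * Z_(n-1) mod m, with R_n = Z_n / m."""
    z = seed
    while True:
        z = (a * z) % m
        yield z / m

gen = lcg(a=5, m=7, seed=9)
print([round(next(gen), 7) for _ in range(7)])
# [0.4285714, 0.1428571, 0.7142857, 0.5714286, 0.8571429, 0.2857143, 0.4285714]
```

Note that R7 equals R1: with m = 7 the generator has a period of at most 6, which is why practical generators use a much larger modulus.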
7.5.1 Definitions
The cumulative distribution function (CDF) of a random variable X is defined by:

FX(x) = Pr{X ≤ x},  x ∈ ℝ.

The inverse of the cumulative distribution function, FX−1 (called the quantile function), is given by:

FX−1(y) = min{x : FX(x) ≥ y},  y ∈ [0, 1].

To generate a random deviate using the inverse transform method:

1. Generate a random number y on the unit interval [0, 1].
2. Set x = FX−1(y).
3. Deliver x.
[Figure: graph of a cumulative distribution function FX.]
fX(x) = 1/(b − a)  if x ∈ [a, b],
fX(x) = 0          otherwise.
The corresponding cumulative distribution function can be explicitly determined by integrating the probability density function:

FX(x) = ∫₋∞ˣ fX(t) dt = 0 if x < a;  (x − a)/(b − a) if x ∈ [a, b];  1 if x > b.
To use the inverse transform method, set FX(x) = y and solve for x as follows:

(x − a)/(b − a) = y  ⟹  x = y(b − a) + a.
Thus, to generate a uniform random deviate on the interval [a, b] we can use the following process:

1. Generate a random number y on the unit interval [0, 1].
2. Set x = y(b − a) + a.
3. Deliver x.
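This process translates directly into code. A minimal Python sketch:

```python
import random

def uniform_deviate(a, b, rng=random):
    """Inverse transform sampling for a uniform deviate on [a, b]:
    generate y on [0, 1], then deliver x = y * (b - a) + a."""
    y = rng.random()
    return y * (b - a) + a

samples = [uniform_deviate(10, 20) for _ in range(1000)]
print(min(samples) >= 10 and max(samples) <= 20)  # True
```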
To use the inverse transform method, set FX(x) = y and solve for x on the intervals [a, c) and [c, b]:

x = a + √(y(b − a)(c − a))         if 0 ≤ y < (c − a)/(b − a),
x = b − √((1 − y)(b − a)(b − c))   if (c − a)/(b − a) ≤ y ≤ 1.
Thus, to generate a triangular random deviate we can use the following process:

1. Generate a random number y on the unit interval [0, 1].
2. If y < (c − a)/(b − a), set x = a + √(y(b − a)(c − a)); otherwise, set x = b − √((1 − y)(b − a)(b − c)).
3. Deliver x.
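A minimal Python sketch of the triangular process, with a and b the end points and c the mode:

```python
import math
import random

def triangular_deviate(a, b, c, rng=random):
    """Inverse transform sampling for a triangular deviate on [a, b]
    with mode c."""
    y = rng.random()
    if y < (c - a) / (b - a):
        return a + math.sqrt(y * (b - a) * (c - a))
    return b - math.sqrt((1 - y) * (b - a) * (b - c))

samples = [triangular_deviate(2500, 2900, 2600) for _ in range(2000)]
print(min(samples) >= 2500 and max(samples) <= 2900)  # True
```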
To use the inverse transform method, set FX(x) = y and solve for x as follows:

x = −µ ln(1 − y).

Now if y is a uniform random number on the interval [0, 1], then 1 − y must also be. Thus, when generating random deviates, we can replace 1 − y in the above equation with y and generate an exponential random deviate using the following process:

1. Generate a random number y on the unit interval [0, 1].
2. Set x = −µ ln(y).
3. Deliver x.
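A minimal Python sketch of the exponential process, with µ the mean of the distribution:

```python
import math
import random

def exponential_deviate(mu, rng=random):
    """Inverse transform sampling for an exponential deviate with
    mean mu: deliver x = -mu * ln(y), using y in place of 1 - y.
    (random() returns values in [0, 1); the zero case is
    vanishingly rare and ignored in this sketch.)"""
    y = rng.random()
    return -mu * math.log(y)

samples = [exponential_deviate(3.85) for _ in range(20000)]
print(sum(samples) / len(samples))  # close to the mean, 3.85
```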
So that we have:

0 ≤ fX(x) ≤ c,  x ∈ [a, b].
The acceptance/rejection method works as follows:

1. Generate a random point x uniformly on the interval [a, b].
2. Generate a random point y uniformly on the interval [0, c].
3. If y ≤ fX(x), accept the point and deliver x; otherwise, reject the point and return to step 1.

This is a trial and error method (see Figure 7.12 for a graphical representation). Basically, we are generating a random point on the xy-plane where the PDF is plotted. If the point falls on or below the PDF curve, it is treated as a sample from the distribution fX; if not, we try again by generating another point.
In Figure 7.12, the first point generated is (x1, y1). Since y1 ≤ fX(x1), this point is accepted and x1 is delivered. The next time a random deviate is called for, the point generated is (x2, y2). Since y2 > fX(x2), this point is rejected and a new point (x3, y3) is generated; but once again y3 > fX(x3), so this point is also rejected and a new point (x4, y4) is generated. This time y4 ≤ fX(x4), so the point is accepted and x4 is delivered.
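The loop sketched in Figure 7.12 can be written out in Python. Here the target fX is taken, for illustration, to be the triangular PDF on [a, b] with mode c, and the bounding constant is its peak value 2/(b − a):

```python
import random

def triangular_pdf(x, a, b, c):
    """PDF of the triangular distribution on [a, b] with mode c."""
    if a <= x <= c:
        return 2.0 * (x - a) / ((b - a) * (c - a))
    if c < x <= b:
        return 2.0 * (b - x) / ((b - a) * (b - c))
    return 0.0

def accept_reject(a, b, c, rng=random):
    """Generate points (x, y) uniformly in the box [a, b] x [0, peak]
    until one falls on or below the PDF curve; deliver its x."""
    peak = 2.0 / (b - a)  # maximum of the PDF, attained at the mode c
    while True:
        x = rng.uniform(a, b)
        y = rng.uniform(0.0, peak)
        if y <= triangular_pdf(x, a, b, c):
            return x

samples = [accept_reject(0.0, 10.0, 4.0) for _ in range(3000)]
print(0.0 <= min(samples) and max(samples) <= 10.0)  # True
```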
[Figure 7.12: the acceptance/rejection method. Points (x1, y1) and (x4, y4) fall below fX and are accepted; points (x2, y2) and (x3, y3) fall above fX and are rejected.]
x1 ≤ x2 ≤ · · · ≤ xn−1 ≤ xn .
In which case:

F̂n(xi) = i/n,  i = 1, . . . , n.
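Computing the empirical CDF from a sample is a one-liner once the observations are sorted. A minimal Python sketch:

```python
def empirical_cdf(data):
    """Return the points (x_(i), i/n) of the empirical CDF,
    with the observations sorted in ascending order."""
    xs = sorted(data)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

print(empirical_cdf([3.1, 1.2, 2.7, 0.9]))
# [(0.9, 0.25), (1.2, 0.5), (2.7, 0.75), (3.1, 1.0)]
```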
7.6. INPUT MODELLING FOR SIMULATION STUDIES 257
[Figure: empirical cumulative distribution function F̂n plotted at the sorted observations x1 through x10.]
This guideline will usually reveal the general layout of the data provided the
number of cells is in the range of 5 to 15. The most frequently encountered
problem with constructing a histogram is the tendency to specify more cells
than the data can support. Sturges' rule accounts for this in a heuristic
manner.
To illustrate the construction of a histogram for a sample data set, we present the data shown below of the time (in minutes) it takes to dump concrete on a floor during a concrete pouring operation:
The parameters for the histogram are now specied. The next step is to go
over the set of observations and count the number that fall into each of the 7
cells. The results of this step are shown in Table 7.4. The histogram is then
constructed as shown in Figure 7.14.
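Sturges' rule puts the number of cells at roughly 1 + 3.322·log10(n) for a sample of size n. The Python sketch below applies the rule and bins a sample into equal-width cells; the observation list is illustrative, not the concrete-pouring data:

```python
import math

def sturges_cells(n):
    """Number of histogram cells suggested by Sturges' rule."""
    return int(round(1.0 + 3.322 * math.log10(n)))

def histogram(data, cells):
    """Count how many observations fall into each of `cells`
    equal-width cells spanning [min(data), max(data)]."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / cells
    counts = [0] * cells
    for x in data:
        i = min(int((x - lo) / width), cells - 1)  # clamp the maximum into the last cell
        counts[i] += 1
    return counts

data = [0.3, 0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 1.0, 1.2]  # illustrative sample
print(sturges_cells(len(data)), histogram(data, sturges_cells(len(data))))
```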
Having constructed an appropriate histogram, the next step in selecting a distribution as an input model is to relate the shape of the histogram to that of a known distribution. A somewhat "bell-shaped" histogram suggests the use of a normal distribution (truncated for duration models). The histogram in Figure 7.14 suggests a beta, gamma, or lognormal distribution. After some experience with input modelling, one should easily be able to relate the shapes of a histogram to those of the theoretical PDFs.
An even better approach for selecting distributions is to start with a "family" of distributions. Families like the beta family, the Pearson family, and the Johnson family (Johnson, 1949), amongst others, give the simulationist
[Figure 7.14: relative frequency histogram of concrete dumping durations (min).]
the flexibility of attaining a wide variety of shapes with the same distribution model. Figure 7.15 shows samples of PDFs from the beta family attained by varying the shape parameters α and β of the beta distribution.
[Figure 7.15: beta PDFs on [0, 1] for (α, β) = (0.5, 0.5), (5.0, 1.0), (1.0, 3.0), (2.0, 2.0), and (2.0, 5.0).]
If we were to sample another 20 numbers, we would not expect the new set to
have the exact same mean and variance; nevertheless, we would expect the
new mean and variance to still be reasonably close to the theoretical values.
It is of course possible (though highly unlikely) that the mean and variance of
our new data are quite different than the theoretical values; this is a random
process after all. However, we can be fairly confident that most of the time
the statistical estimators for our sample will be close to the theoretical ones.
In fact, if we were to generate larger and larger samples, we would become
more and more confident of this, until, at the limit (sample size tending to
infinity), the estimators for the sample equal the theoretical ones.
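This tightening of the estimators around the theoretical values can be demonstrated with a short experiment. The sketch below is our own illustration (the parameter values are hypothetical, not the text's data):

```python
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

mu, sigma = 19.3, 4.2  # hypothetical theoretical parameters
for n in (20, 200, 20000):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    print(n,
          round(statistics.mean(sample), 3),       # estimate of mu
          round(statistics.pvariance(sample), 3))  # estimate of sigma^2
```

For small n the estimates wander noticeably; as n grows they settle close to μ = 19.3 and σ² = 17.64, mirroring the limiting behaviour described above.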
Now suppose that we are aware that the above data was sampled from a
normal distribution, but that we do not know the distribution's parameters.
In order to determine them, we might simply assume that the theoretical
mean and variance of the distribution are the same as the mean and variance
of the data, i.e., that the data was sampled from a normal distribution with
a mean of 19.29535 and a variance of 17.50245. When we do this, we are
using the method of moments to find the parameters of the distribution.
In general, if x_1, …, x_n are random deviates sampled from a random
variable X, then the j-th sample moment of the deviates is defined to be:

    m'_j = \frac{1}{n} \sum_{i=1}^{n} x_i^j.
In particular, X̄ = m'_1 is the mean of the deviates, and (as is easily verified)
S^2 = m'_2 − (m'_1)^2 is the (population) variance of the deviates.
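A minimal sketch of computing sample moments (the data values are our own illustration):

```python
def sample_moment(xs, j):
    """j-th sample moment: m'_j = (1/n) * sum(x_i^j)."""
    return sum(x ** j for x in xs) / len(xs)

data = [1.0, 2.0, 3.0, 4.0]      # illustrative data
m1 = sample_moment(data, 1)      # sample mean, X-bar
m2 = sample_moment(data, 2)
variance = m2 - m1 ** 2          # population variance S^2 = m'_2 - (m'_1)^2
print(m1, m2, variance)          # 2.5 7.5 1.25
```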
Next, suppose we believe that X is closely modelled by a certain probability
distribution with parameters θ_1, …, θ_q and probability density function
f_X. The j-th moment of the distribution is defined to be:

    \mu'_j(\theta_1, \ldots, \theta_q) = \int_{-\infty}^{\infty} x^j f_X(x; \theta_1, \ldots, \theta_q)\, dx.
7.6. INPUT MODELLING FOR SIMULATION STUDIES 263
To apply the method of moments, we set the theoretical moments equal to the
sample moments, μ'_j(θ_1, …, θ_q) = m'_j for j = 1, …, q, and solve for θ_1, …, θ_q
in terms of m'_1, …, m'_q. In this way, we have expressed θ_1, …, θ_q in terms
that can be easily calculated from our sample data.
For example, for an exponential distribution with mean μ, the density is
f_X(x; μ) = (1/μ)e^{−x/μ}, and the first moment is:

    \mu'_1(\mu) = \frac{1}{\mu} \int_0^{\infty} x e^{-x/\mu}\, dx
    = \Big[ -x e^{-x/\mu} \Big]_0^{\infty} + \int_0^{\infty} e^{-x/\mu}\, dx \quad \text{(integration by parts)}
    = (0 - 0) + \Big[ -\mu e^{-x/\mu} \Big]_0^{\infty}
    = 0 + (0 + \mu)
    = \mu.
This equation shows that the method of moments estimate for µ is simply
the mean of our data.
For the normal distribution, the density is:

    f_X(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}},

and the first two moments of the distribution are:

    \mu'_1(\mu, \sigma^2) = \mu,
    \mu'_2(\mu, \sigma^2) = \mu^2 + \sigma^2.

Setting these equal to the corresponding sample moments and solving gives:

    \mu = m'_1 = \bar{X},
    \sigma^2 = m'_2 - (m'_1)^2 = S^2.
Thus, the method of moments estimate for µ is the mean of our samples, and
the estimate for σ 2 is the (population) variance of our samples.
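The normal-distribution fit above can be sketched in a few lines (the data values are our own illustration):

```python
def fit_normal_mom(xs):
    """Method-of-moments fit of a normal distribution: equate the first two
    theoretical moments (mu, mu^2 + sigma^2) to the sample moments."""
    n = len(xs)
    m1 = sum(xs) / n                    # m'_1 = X-bar
    m2 = sum(x * x for x in xs) / n     # m'_2
    return m1, m2 - m1 * m1             # (mu-hat, sigma^2-hat = S^2)

mu_hat, var_hat = fit_normal_mom([2.0, 4.0, 6.0, 8.0])
print(mu_hat, var_hat)  # 5.0 5.0
```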
For the gamma distribution, the density is:

    f_X(x; k, \theta) = \frac{1}{\Gamma(k)\theta^k} x^{k-1} e^{-x/\theta},
where Γ denotes the gamma function. The first two moments of the distribution
are μ'_1(k, θ) = kθ and μ'_2(k, θ) = k(k + 1)θ^2. Setting these equal to the
sample moments and solving gives:

    k = \frac{(m'_1)^2}{m'_2 - (m'_1)^2} = \frac{\bar{X}^2}{S^2},
    \theta = \frac{m'_2 - (m'_1)^2}{m'_1} = \frac{S^2}{\bar{X}}.
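The gamma estimates translate directly into code (a sketch with illustrative data of our own):

```python
def fit_gamma_mom(xs):
    """Method-of-moments estimates for the gamma(k, theta) distribution:
    k = X-bar^2 / S^2,  theta = S^2 / X-bar."""
    n = len(xs)
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / n   # population variance S^2
    return xbar * xbar / s2, s2 / xbar

k_hat, theta_hat = fit_gamma_mom([1.0, 2.0, 3.0, 6.0])
print(k_hat, theta_hat)
```

For this data, X̄ = 3.0 and S² = 3.5, so k̂ = 9/3.5 ≈ 2.571 and θ̂ = 3.5/3 ≈ 1.167.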
For the normal distribution, the log-likelihood of the sample is:

    l(\mu, \sigma^2) = \sum_{i=1}^{n} \ln\left( f_X(x_i; \mu, \sigma^2) \right)
    = \sum_{i=1}^{n} \ln\left( \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x_i-\mu)^2}{2\sigma^2}} \right)
    = \sum_{i=1}^{n} \left[ \ln \frac{1}{\sqrt{2\pi\sigma^2}} - \frac{(x_i-\mu)^2}{2\sigma^2} \right]
    = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i-\mu)^2.
Differentiating with respect to μ:

    \frac{\partial}{\partial\mu} l(\mu, \sigma^2) = -\frac{1}{2\sigma^2} \sum_{i=1}^{n} \frac{\partial}{\partial\mu} (x_i-\mu)^2
    = \frac{1}{\sigma^2} \sum_{i=1}^{n} (x_i - \mu)
    = \frac{1}{\sigma^2} \left( \sum_{i=1}^{n} x_i - \sum_{i=1}^{n} \mu \right)
    = \frac{n}{\sigma^2} (\bar{X} - \mu).
And since both n and σ 2 are strictly positive, this can only be zero when:
µ = X̄.
And again, since both n and σ^2 are strictly positive, this can only be zero
when:

    \sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2.
Substituting in μ = X̄ from above gives:

    \sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{X})^2 = S^2.
Thus, the method of maximum likelihood estimate for µ is the mean
of our samples, and the estimate for σ 2 is the (population) variance of our
samples. Note that for the normal distribution, the estimates obtained by
the method of maximum likelihood are the same as the estimates obtained
by the method of moments. This will not be the case in general.
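The claim that (X̄, S²) maximizes the normal log-likelihood can be checked numerically. The sketch below (our own illustration, with hypothetical data) evaluates the log-likelihood at the closed-form MLE and at nearby parameter values:

```python
import math

def log_likelihood(xs, mu, var):
    """Normal log-likelihood in the form derived above."""
    n = len(xs)
    return (-0.5 * n * math.log(2 * math.pi) - 0.5 * n * math.log(var)
            - sum((x - mu) ** 2 for x in xs) / (2 * var))

data = [3.1, 4.7, 5.2, 6.0, 6.9]          # illustrative data
n = len(data)
xbar = sum(data) / n                       # X-bar
s2 = sum((x - xbar) ** 2 for x in data) / n  # S^2

# The closed-form MLE (X-bar, S^2) should beat any nearby parameter values.
best = log_likelihood(data, xbar, s2)
for dmu in (-0.1, 0.1):
    assert log_likelihood(data, xbar + dmu, s2) < best
for dvar in (-0.1, 0.1):
    assert log_likelihood(data, xbar, s2 + dvar) < best
print(xbar, s2, best)
```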
[Figure: empirical CDF F̂_n and fitted CDF F_X plotted against the observations x_1, …, x_10, showing the residual at x_6]
goodness of fit test by using statistical tests (like the Chi-Square or the K-S
tests), or by visually assessing the quality of the fit.
    N_j = \text{number of } x_i \in [a_{j-1}, a_j), \quad j = 1, \ldots, k,

and

    p_j = \int_{a_{j-1}}^{a_j} \hat{f}_X(x; \hat{\theta}_1, \ldots, \hat{\theta}_q)\, dx, \quad j = 1, \ldots, k,
The chi-square test statistic is then:

    \chi^2 = \sum_{j=1}^{k} \frac{(N_j - n p_j)^2}{n p_j},

which should be "small" if the fit is good. We're left with two important
questions:
1. How small does the test statistic χ^2 need to be for the fit to be considered good?
Interpreting the Test Statistic. If the xi 's were actually drawn from the
distribution we're considering, then (in theory) the test statistic χ2 is chi-
squared distributed with k − 1 − q degrees of freedom. This means that we
can determine the probability that the xi 's were drawn from our distribution
by calculating the area of the tail of a chi-squared distribution with k − 1 − q
degrees of freedom as shown in Figure 7.17.
[Figure 7.17: a chi-squared distribution with k − 1 − q degrees of freedom; the tail area to the right of the observed χ^2 gives the probability that the x_i's were drawn from the fitted distribution]
where α is the significance level of the test and z_{1−α} is the standard normal z-value
corresponding to the probability 1 − α. We then generate the intervals
by setting:

    a_j = \hat{F}_X^{-1}\!\left( \frac{j}{k} \right), \quad j = 1, \ldots, k,

where it is possible that a_0 = −∞ and/or a_k = +∞. By defining the intervals
in this way they become equiprobable, i.e.:

    p_j = \int_{a_{j-1}}^{a_j} \hat{f}_X(x)\, dx = \frac{1}{k}, \quad j = 1, \ldots, k.
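The equiprobable-interval construction and the chi-square statistic can be sketched together. The example below is our own illustration: it fits an exponential distribution by the method of moments (θ̂ = sample mean) and bins synthetic data into k equiprobable intervals:

```python
import math
import random

random.seed(7)
n, k = 200, 10
data = [random.expovariate(1 / 2.0) for _ in range(n)]  # "observed" data

# Fit an exponential by the method of moments (theta-hat = sample mean),
# then build equiprobable interval edges a_j = F^-1(j/k) for the fitted CDF,
# using F^-1(q) = -theta * ln(1 - q).
theta_hat = sum(data) / n
edges = [-theta_hat * math.log(1 - j / k) for j in range(k)] + [math.inf]

# Count observations per interval; each expected count is n * p_j = n / k.
counts = [sum(1 for x in data if edges[j] <= x < edges[j + 1])
          for j in range(k)]
expected = n / k
chi2 = sum((c - expected) ** 2 / expected for c in counts)
print(counts, round(chi2, 3))  # compare chi2 to a chi-squared table,
                               # k - 1 - q = 10 - 1 - 1 = 8 degrees of freedom
```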
and

    D^- = \sup\{\hat{F}_X(x) - \hat{F}_n(x) : x \in \mathbb{R}\} = \max\left\{ \hat{F}_X(x_i) - \frac{i-1}{n} : i = 1, \ldots, n \right\}.

The Kolmogorov-Smirnov test statistic is then:

    D = \max\{D^+, D^-\}.
[Figure: empirical CDF F̂_n and fitted CDF F_X plotted against the observations x_1, …, x_10, showing the K-S deviations D^+ and D^-]
how well the fitted CDF tracks the empirical one. Alternatively, one can
compare how well the shape of the sample histogram matches that of
the theoretical PDF. When the CDF is available, it is always better to use
it in your comparison because, as previously mentioned, a histogram can be
easily distorted and can attain any desired shape.
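The K-S statistic defined above is straightforward to compute from sorted observations. The sketch below is our own illustration (the sample and the fitted exponential CDF are hypothetical):

```python
import math

def ks_statistic(xs, cdf):
    """Kolmogorov-Smirnov statistic D = max(D+, D-) for fitted CDF `cdf`:
    D+ = max(i/n - F(x_(i))),  D- = max(F(x_(i)) - (i-1)/n)."""
    xs = sorted(xs)
    n = len(xs)
    d_plus = max(i / n - cdf(x) for i, x in enumerate(xs, start=1))
    d_minus = max(cdf(x) - (i - 1) / n for i, x in enumerate(xs, start=1))
    return max(d_plus, d_minus)

# Compare a small sample against a fitted Exponential(theta = 2) CDF.
exp_cdf = lambda x: 1 - math.exp(-x / 2.0)
d = ks_statistic([0.3, 0.9, 1.4, 2.2, 3.1, 4.8], exp_cdf)
print(round(d, 4))
```

The resulting D would then be compared against tabulated K-S critical values for the chosen significance level.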
where the a_i's are normalization regression coefficients usually obtained from
tables or computer programs, x_(i) is the i-th order statistic, and S^2 is the
sample variance. The value of W is usually compared to the percentile values
of the distribution of the test at a specific level of certainty (see, for example,
the tabulated values in Hahn and Shapiro (1967)). In subjective terms, the
sample would be normal (or close to normal) when W is close to 1.
    X_q = \min\{x \in \mathbb{R} : q \leq F_X(x)\},
Estimating Probabilities
The probability of completing a job on time is also very valuable in a number
of situations. A classic example would be the simulation of scheduling
networks (e.g., PERT type) in an attempt to determine the probability of
meeting a target date.
The cumulative distribution function F_X of an output parameter X tells
us the probability that X does not exceed a particular fixed value x:

    F_X(x) = \Pr\{X \leq x\}, \quad x \in \mathbb{R}.
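A Monte Carlo sketch of the PERT-style example follows. Everything here is a hypothetical illustration of our own (a serial three-activity network with triangular durations), not a model from the text:

```python
import random

random.seed(1)

# Hypothetical serial network: three activities whose durations are drawn
# from triangular distributions given as (low, mode, high), in days.
activities = [(4, 6, 9), (2, 3, 5), (7, 10, 14)]

def project_duration():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities)

runs = 10000
durations = [project_duration() for _ in range(runs)]

# The empirical CDF evaluated at the target date estimates Pr{X <= x}.
target = 21.0
prob = sum(1 for d in durations if d <= target) / runs
print(f"Pr{{completion <= {target} days}} ~= {prob:.3f}")
```

The estimated probability is simply the fraction of simulated project durations that meet the target date, i.e., the empirical CDF F̂_n(21.0).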
(MTBF) and the Mean Time To Repair (MTTR). Given that the occurrence
of equipment failures is highly uncertain, values for the time between failure
events will also vary. An effective way to numerically represent this variation
is through the use of statistical distributions. The time that it takes to fix
broken equipment is another parameter that varies quite a lot. It, too, is
modelled using a statistical distribution.
The most effective way of studying stochastic phenomena such as equipment
failures is by conducting simulation studies. These studies can generate
useful information, such as the number of times that a specific equipment
type broke down, and statistics on how long it took to fix. If the simulation
models are developed in an intelligent fashion, they can be configured to
generate information about the optimal number of mechanics and repair bays to
designate in order to achieve preset performance targets, e.g., the maximum time
the equipment should wait before repair work can commence, the maximum
number of equipment units waiting in a queue, etc.
In typical simulation studies, the statistical distributions used to represent
parameters for equipment failure, repair, and maintenance processes are
defined through an input modelling process. This process can be carried out
with the guidance of experts or using data collected on similar past projects.
The following example has been set up to illustrate how these types of
information can be generated for typical equipment maintenance and repair
problems within a practical setting.
Maintenance does not prevent equipment from breaking down. The two
types of events take place independently for all types of equipment.
There are two main kinds of breakdown and maintenance: the first
kind requires a Heavy Duty Machine (HDM) crew to repair, and the
second requires a welding crew. Table 7.11 summarizes the probability
of each breakdown repair or maintenance occurring for each type of
equipment.
is to obtain the number of servers for the HDM and welder resources that
produce satisfactory wait times for equipment requiring service, i.e., less than
one hour.
The last two phases are embellishments to the base model. The first
of these, i.e., embellishment one, utilizes the optimal number of resource
servers obtained from the base model to determine the hours spent in service
annually by each piece of equipment. Total service times are tracked for an
entire fleet for each equipment type. Also, averages for annual service times
are found for a single unit of each equipment type. This phase also reports
the number of times each equipment type required a specific type of service
and the associated wait times. The last embellishment,
i.e., embellishment two, investigates the benefits of implementing a planned
policy for servicing equipment. This policy involves prioritizing the service of
the equipment type that has the highest need for service annually. The merits
of implementing this policy are evaluated based on improvements obtained
in waiting times for equipment to receive service. Each of these phases is
discussed in detail in the following sections.
squares, while the Kolmogorov-Smirnov (K-S) test is used for testing the
appropriateness of the fit.
The statistical distributions chosen for modelling the interval between
equipment failure events based on these two criteria are summarized in
Table 7.12.
It is important to note that this selection was restricted to statistical
distributions that are bounded to the left and non-negative, because
random deviates sampled to model durations in simulation cannot be
negative.
Visual inspection of the input modelling results is another common way of
assessing the goodness of fit for theoretical distributions. This inspection can
be done by assessing the degree of dispersion of the theoretical distribution
from the empirical distribution on a Probability Density Function (PDF) or
a Cumulative Distribution Function (CDF). In this example, visual inspection
is performed on the CDF. Figure 7.19 shows an overlay of the theoretical and
empirical distributions for the mean time between truck failure events. Beta
is the theoretical distribution chosen in this case.
These distributions are defined as an input to the duration property of
the appropriate Task modelling elements so that Simphony is able to use
them to schedule the failure events, hence emulating real-life failures for the
equipment.
[Figure 7.19: overlay of the theoretical (beta) and empirical CDFs for the mean time between truck failure events]
The mine operates 24 hours a day, 7 days a week, and 365 days a year.
Figure 7.20: Base Model for Mining Equipment Maintenance and Repair
The original equipment unit entities that arrived at the Generate element
are routed out through the top output port, then into a Counter modelling
element, and finally into a Composite modelling element that contains a
model layout that emulates the operation and maintenance of these entities.
Clones of the original equipment unit entities are routed out of the Generate
modelling element through its bottom output port. These then flow through
a Counter modelling element, and finally into a Composite modelling element
that emulates the operation and repair of the equipment that the entities
represent. The Counter elements are used at these strategic locations to verify
that the right number of entities got to that part of the simulation model.
The model layouts within all Composite elements that emulate equipment
operation and maintenance are the same. Likewise, the model layouts within
Composite elements that emulate equipment operation and repair are the
same. Entities routed into them keep flowing in a cyclic fashion, triggering
the scheduling and processing of simulation events until the simulation is
terminated. It is important to state that all equipment (i.e., entities) is serviced
by resources (i.e., welders and HDMs) on a First-In-First-Out (FIFO) basis
in this version of the model.
LX(0) = TimeNow
Return True
After truck entities have been time-stamped, they are routed out of this
Execute element into a ProbabilisticBranch element labelled Truck Maintenance
Type?. At this element, Simphony will route each truck entity out
the top output port with an 82% chance, implying that the truck requires
an HDM resource for maintenance. Alternatively, Simphony routes the entity
out the bottom output port, implying that the truck requires a welder
resource for the maintenance activity.
Truck entities requiring HDM maintenance are routed into a Composite
element labelled Truck HDM Maintenance, while those that require
welder maintenance are routed into a Composite element labelled Truck
Welder Maintenance. The model layouts within these two Composite
elements are identical. That for the Composite labelled Truck HDM
Maintenance is presented in Figure 7.22 and discussed below.
Truck entities arriving at the maintenance Composite are routed into the
embedded model via the input port labelled Start HDM Maintenance 1.
Each truck entity then flows through a Counter element labelled HDM
Maintenances 1, then into a Capture element labelled Truck Captures HDM 1.
When a truck entity gets transferred into this Capture element, it requests
one server of the HDM resource (labelled HDM(s)) with a priority of zero.
If there are no servers available, this request is queued in the File element
labelled HDM(s)Q. When the queued request for the truck entity is fulfilled,
the truck entity is routed into the Task modelling element labelled HDM
Maintains Truck, where it is retained for 5 hours, emulating maintenance
work being done on the truck.
7.8. EXAMPLE: EQUIPMENT BREAKDOWNS 291
After maintenance work is done, the truck entity is routed into a Release
modelling element labelled Truck Releases HDM 1, where it releases the
server of the HDM resource that had been granted to it. The truck entity
then flows through a Counter element labelled HDM Maintenances 2, then
into an output port labelled End HDM Maintenance 1, where it is transferred
out of the truck HDM maintenance Composite.
Truck entities leaving the Composite elements labelled Truck HDM
Maintenance and Truck Welder Maintenance are transferred into an Execute
element labelled Compute Service Time, where they trigger the evaluation
of the following formula:
GX(1) = GX(1) + TimeNow - LX(0)
Return True
This formula evaluates the time that the truck entity was not working
due to the maintenance work that it required. It bases this computation on the
value of the current simulation engine time and the value time-stamped in the
LX(0) attribute of the entity. Note that this non-working time will include
the time that the truck had to wait for the required resource and the time
that the truck was actually being maintained. Every time this computation is
done, the result is used to obtain a new cumulated value of non-working time
for the truck since the start of the simulation. This cumulated value is stored
in a designated global attribute, i.e., GX(1). GX(1) was designated for the
trucks, while other attributes were designated for other types of equipment.
See Table 7.11 for these details. A global attribute is used for this purpose
for two reasons:
The cyclic loop that emulates the operation and subsequent repair of
trucks is identical to that just described, hence there is no need to discuss
it. There are only two differences:
The time between truck repairs is sampled from the appropriate statistical
distribution defined in the input modelling section of this exercise.
Resetting the Limit property to its default value of zero. This property
of the Counter labelled Termination Flag (Limit) is used as a
placeholder for the termination flag. Initialization of this counter property
prevents the simulation from being terminated prematurely.
R1.Servers = 1
R2.Servers = 1
End If
This formula is evaluated every time an entity is transferred into the
Execute element. In this model, the Create modelling element labelled Create
Entity is set up to release an entity at the start of each simulation run. This
entity is then transferred into the Execute element, triggering the evaluation
of the formula. A check is inserted using an If...Then statement to determine
whether the first run is currently being simulated, i.e., whether it is the start
of a new simulation experiment. If this is the case, all required initialization
is done by the formula.
Server Optimization After the servers for the HDM(s) and welder(s)
resources have been initialized, the simulation experiment proceeds and is only
terminated when satisfactory results have been obtained from a simulation
Return True
At the end of each simulation run, the Simphony simulation system
evaluates the formula that has been presented. In order to determine whether
results from the simulation run that was just completed are satisfactory,
the maximum wait times are retrieved from the statistics of the HDM(s)Q
and Welder(s)Q waiting files. Results are deemed satisfactory when these
maximum times are equal to or less than one hour. If the results are not
satisfactory, the number of servers for the resource that has an undesirable
waiting time is incremented by one and the next simulation run is started.
[Flowchart: server optimization logic. Start → Simphony sets the run index to zero. If the run index is zero, a new simulation experiment begins and the resource servers are set to one. The simulation run is executed, and the maximum waiting times of the HDM(s)Q and Welder(s)Q files are retrieved. If both maxima are less than or equal to 1.0 hr, the limit of the counter is set to 1.0; when the counter limit equals 1.0, HaltScenario() is called, i.e., the experiment ends. Otherwise, the number of HDM and/or welder servers whose queue exceeded a 1.0 hr maximum waiting time is increased by 1.0, and the model moves to the next run index. → End]
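The server optimization logic described above can be sketched as a simple search loop. The `run_simulation` function below is a hypothetical stand-in of our own for a Simphony run (a made-up diminishing wait-time curve), used only to show the control flow:

```python
# Hypothetical stand-in for a Simphony run: returns the maximum waiting
# times (hours) observed in the HDM(s)Q and Welder(s)Q files for a given
# server configuration. The curves here are invented for illustration.
def run_simulation(hdm_servers, welder_servers):
    max_hdm_wait = 180.0 / hdm_servers ** 2
    max_welder_wait = 45.0 / welder_servers ** 2
    return max_hdm_wait, max_welder_wait

hdm, welder = 1, 1   # start of a new experiment: one server each
while True:
    hdm_wait, welder_wait = run_simulation(hdm, welder)
    if hdm_wait <= 1.0 and welder_wait <= 1.0:
        break                # satisfactory: the equivalent of HaltScenario()
    if hdm_wait > 1.0:
        hdm += 1             # grow whichever resource is under-performing
    if welder_wait > 1.0:
        welder += 1
print(hdm, welder)
```

Each pass mirrors one simulation run: check the maximum waits, and if either exceeds one hour, add a server to the offending resource and run again.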
Figure 7.25: Maximum Waiting Time for HDM(s) vs. Simulation Run
Figure 7.26: Maximum Waiting Time for Welders vs. Simulation Run
[Figure: number of HDM servers vs. simulation run]
[Figure: number of welder servers vs. simulation run]
Statistics for the times that equipment must wait before service commences.
based on waiting times are removed. The only code put in there is for tracking
the service hours for equipment and the wait times that equipment experience
before they get serviced.
At the top level, the layout of the base model is embellished by the
addition of Composite elements that encapsulate the Statistics modelling elements
required to track service hours and wait times. The layout of the Statistics
modelling elements encapsulated within the Composite element that tracks
service hours for equipment is shown in Figure 7.31.
Global and local attributes within the Simphony simulation system are
set up in such a way that all the values they possess from a previous
simulation run are cleared out at the start of a new run and reset to their
default values. In this simulation exercise, the use of global attributes for the
storage of non-working times for equipment is the obvious choice. However,
global attributes alone will not be sufficient, because Simphony resets
attribute values between simulation runs. Observations stored within statistics
in Simphony, on the other hand, persist between simulation runs. Statistic
modelling elements are included in the embellished model for this reason.
During each simulation run, the service hours for equipment are cumulated
and stored in their designated global attributes. Then, at the end of
each run, the values of each of these global attributes are collected into the
appropriate Statistic modelling element. This implies that each Statistic
modelling element will have a number of observations equal to the total number
of simulation runs executed in a given simulation experiment. Another
advantage of using the Statistics modelling element is that it automatically
computes all the statistics of the observations collected to it, which are useful
when performing output analysis.
Details of the attributes designated for tracking the service hours for
equipment in each simulated year are summarized in Table 7.15. For each
type of equipment, a specific global attribute is designated to collect the
cumulative hours spent in repair, maintenance, and both repair and
maintenance. This is for the entire fleet of a given equipment type.
Attribute Designation
GX(0) The total service hours for all shovels in a given year
GX(1) The maintenance hours for all shovels in a given year
GX(2) The repair hours for all shovels in a given year
GX(3) The total service hours for all trucks in a given year
GX(4) The maintenance hours for all trucks in a given year
GX(5) The repair hours for all trucks in a given year
GX(6) The total service hours for all scrapers in a given year
GX(7) The maintenance hours for all scrapers in a given year
GX(8) The repair hours for all scrapers in a given year
GX(9) The total service hours for all graders in a given year
GX(10) The maintenance hours for all graders in a given year
GX(11) The repair hours for all graders in a given year
GX(12) The total service hours for all loaders in a given year
GX(13) The maintenance hours for all loaders in a given year
GX(14) The repair hours for all loaders in a given year
Other statements similar to this are used to collect the waiting time
observations for other types of equipment.
[Figure: number of HDM repair instances by equipment type (shovels, trucks, scrapers, graders, loaders)]
and shown in Table 7.17 is the population standard deviation, while our
calculation requires the sample standard deviation. We can convert from one
to the other as follows:

    S = \sqrt{\frac{n}{n-1}\,\sigma^2} = \sqrt{\frac{100}{99} \times 25.673^2} \approx 25.802,

so the confidence interval is:

    \bar{X} \pm t_{1-\alpha/2,\,n-1} \frac{S}{\sqrt{n}} = 9{,}643.991 \pm t_{0.975,\,99} \times \frac{25.802}{\sqrt{100}}
    \approx 9{,}643.991 \pm 1.984 \times \frac{25.802}{10}
    \approx 9{,}643.991 \pm 5.119.
The confidence intervals for the service hours for the different types of
equipment are summarized in Table 7.18.
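The interval arithmetic for the truck service hours can be reproduced in a few lines. The t-value 1.984 is the tabulated t(0.975, 99) quoted in the text, since the Python standard library has no t-distribution:

```python
import math

# Convert the population SD from Table 7.17 to the sample SD, then form
# the t-based confidence interval for the mean annual service hours.
n = 100
mean = 9643.991
sigma = 25.673            # population standard deviation (from the table)
t_crit = 1.984            # t_{0.975, 99}, taken from a t-table

s = math.sqrt(n / (n - 1) * sigma ** 2)   # sample standard deviation
half_width = t_crit * s / math.sqrt(n)
print(round(s, 3), round(half_width, 3))  # ~25.802 and ~5.119
print(round(mean - half_width, 3), round(mean + half_width, 3))
```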
Waiting Times The delays associated with equipment waiting to get
serviced are tracked using Statistics modelling elements. The results obtained are
summarized in Table 7.19. These results indicate that trucks experience
the longest wait times of any equipment type. This is consistent with the
earlier finding that trucks spend the longest time in service in any given year.
Table 7.18: Confidence Intervals for the Annual Service Hours for Equipment
time that the queue length is greater than one are served on a First-In-First-
Out (FIFO) basis. It is assumed that implementing this policy makes more
resources available for the service of each truck; hence, the time needed to
perform the service is reduced to a quarter of its original length. This translates
into a maintenance time of 1.25 hours and a repair time of 3.0 hours for all
types of service.
It is reasonable to test this policy and assess its possible benefits within
a virtual simulation environment before implementing it in the field. This
is the objective of this embellishment. In order to achieve this, the
simulation model developed for embellishment one is modified to account for these
specifications.
The model layouts within the Composite elements that emulate the
maintenance and repair of trucks are modified to account for the specifications in
the policy. First, the modelling elements between the Counter elements that
are after the ProbabilisticBranch element (which decides the type of service)
and the Execute element (which updates the non-working time) are
encapsulated within new Composite elements. Additional modelling elements are
then added to embellish the logic modelled within these new Composite
elements. At a top level, the new model layout within the Composite element
that emulates truck maintenance is shown in Figure 7.35. This model layout
is similar to that within the Composite modelling element that emulates
truck repairs.
Implementing these modifications to the model in order to accommodate
the specifications in the described policy can result in logical errors such as
deadlock, especially in instances where the queue for trucks requiring service is
not emptied appropriately. To avoid this deadlock situation, two Resource
elements, each with a single server and file, are added at the topmost level
of the simulation model to emulate a constraint ensuring that only one
truck can have all the HDMs or all the welders for a given service. Figure 7.36
presents the layout of these Resource elements and their respective files.
Composite modelling elements are introduced into the model to
encapsulate the elements that are required for each truck to preempt all HDMs or
welders. The model layout shown in Figure 7.37 is used to achieve
preemption of HDMs for truck maintenance. This model layout is similar to that
used within the Composite for truck repair.
The Capture modelling elements used within the maintenance and repair
Composite elements for the trucks are replaced with Preempt modelling
elements. The default priority of each Preempt is changed so that trucks
Figure 7.36: Resources and Files to Permit Safe Preemption of Service Crews
Figure 7.37: Modified Model for Truck HDM Maintenance Service (Embellishment Two)
requiring service are assigned a higher priority than other types of equipment.
The Preempt element is used to ensure that servers of a Resource that are
engaged in the service of other equipment types get assigned to the truck that
requires them, without delay. The Preempt modelling element in Simphony
has a restriction that does not allow more than one server of the targeted
resource to be assigned to a preempting entity. In order to overcome this
restriction, the model layout was configured in such a way that truck entities
keep cycling through a loop until each entity has preempted all the servers
that it requires. This loop is comprised of the Preempt element (labelled
Preempt an HDM in Figure 7.37) and a ConditionalBranch modelling element
(labelled All HDMs Preempted? in Figure 7.37).
A Visual Basic code snippet is embedded within the formula editor of
the ConditionalBranch's Condition property that checks to make sure all the
servers have been preempted by the truck entity. In order for this check to
work well, the number of servers preempted by a truck entity at any point in
time must be stored locally. The LN(0) attribute of the truck entity is
designated for this purpose. Given that each time a truck entity is transferred
into this ConditionalBranch modelling element it will have preempted a higher
number of servers than it previously had, a statement is included in the code
snippet that increments the LN(0) attribute by one. The following code is
used to perform this increment and to check the fulfillment of the preemption
of all required servers. In this code, it is assumed that the truck entity requires
HDMs for service.
LN(0) = LN(0) + 1
If LN(0) = ServersAvailable("HDM(s)") + _
        ServersInUse("HDM(s)") Then
    Return True
Else
    Return False
End If
After all the servers for a given resource have been preempted, the truck
entity is routed out through the True output port of the ConditionalBranch
modelling element labelled All HDMs Preempted? and proceeds into a Task
modelling element labelled Maintain Truck that emulates the maintenance
work being done on the truck.
After the truck entity is released from the Task element, it is routed into
modelling elements that release all the resources assigned to the entity. It is
first transferred into a Release element (labelled Release Preempted
HDMs in Figure 7.37) that frees all the resource servers that the entity
preempted. Information about the number of servers to free is retrieved
from the LN(0) attribute of the truck entity, and then LN(0) is reset to its
default value, i.e., zero. The following formula is inserted into the Servers
property of this Release element to achieve this behaviour.
Dim X As Integer = LN(0)
LN(0) = 0
Return X
given year is used as a test. Table 7.22 summarizes the average values of this
metric for the embellishment one and embellishment two models.
The values summarized in Table 7.22 indicate that implementing the policy in
which the service of trucks is prioritized over other equipment types results
in a 20.45% reduction in the average time that all equipment spend in service
annually. This means that equipment will spend more time in production,
resulting in better performance on the reclamation project. This overall
reduction arises from a reduction in the annual service hours for trucks.
Conclusion
Comparison of the results obtained in embellishment one with those from
this embellishment confirms the viability of implementing this policy and its
potential benefits for the project.
This embellishment, i.e., embellishment two, also demonstrated how
discrete event simulation can be used to support decision-making processes,
eliminating the need to rely on gut feelings or to experiment with the real
system to know whether a given policy will yield good results.
APPENDIX A. SIMPHONY.NET USER'S GUIDE
Installation Procedures
To install Simphony.NET, run the MSI file from the distribution set and
follow the instructions.
Simphony models have the file extension *.sim. Simphony can be started
by:
2. Start > All Programs > Simphony.NET 4.0 > Modelling Environment;
or
Modelling Surface
The Modelling Surface is the main workspace for building simulation models.
Modelling elements are placed on the Modelling Surface by dragging them
from the Template Palette. The Modelling Surface can accommodate multiple
tabs, thus allowing different portions of the model to be quickly accessed.
Model Explorer
The Model Explorer displays a navigation tree representing the structure of
the current simulation project. The root of the tree is always the model
itself. Under the model, one or more scenarios can be created. Each scenario
contains a slightly different version of the same simulation model, and these
versions can be compared after the simulation has been run. For example, two
scenarios could contain the same model, but one is configured for crews working
an 8-hour shift and the other for a 10-hour shift. Underneath the scenarios will
be the hierarchical structure of the model. Double clicking on an entry in the
tree view will bring up the corresponding portion in the Modelling Surface.
Template Palette
The Template Palette displays a list of all elements available in the
modelling element library which can be used to construct new projects. These
elements are categorized by the templates to which they belong or by folders
within the templates. Users can add special purpose templates by selecting
the Add Template item under the File menu and looking in the
\Simphony.NET 4.0\Templates\ folder.
Property Grid
The Property Grid displays the properties of the scenario or modelling
element selected on the Modelling Surface. Users can specify the name and
input parameters of scenario or modelling elements. Some of the input
parameters can be defined using user-defined code in either Visual Basic or
C#. The associated output for each scenario or modelling element is also
displayed, as well as the relevant statistics. Detailed property information
for each element can be found in the associated template manual.
Trace/Debug/Error Windows
This window consists of three tabs: the Trace tab displays trace messages
generated by the model during simulation; the Debug tab displays similar
messages intended to assist with debugging a model; and the Errors tab
displays any integrity errors/warnings that may be present in the model.
File Menu
Item Description
New (Ctrl+N) Creates a new simulation model.
Open. . . (Ctrl+O) Opens an existing simulation model.
Save (Ctrl+S) Saves the current simulation model.
Save As. . . (F12) Saves the current model under a different file name.
Add Template. . . Adds a template to the Template Palette.
Remove Template Removes the selected template from the Template Palette.
Add Scenario Adds a new scenario to the model.
Remove Scenario Removes the current scenario from the model.
Print Preview. . . Previews printing of the Modelling Surface.
Print. . . (Ctrl+P) Prints the Modelling Surface.
Recent Files Opens a recently edited simulation model.
Exit (Alt+F4) Closes Simphony.
Edit Menu
Item Description
Undo (Ctrl+Z) Undoes the last action.
Redo (Ctrl+Y) Redoes the last action.
Cut (Ctrl+X) Cuts the selected element(s).
Copy (Ctrl+C) Copies the selected element(s).
Paste (Ctrl+V) Pastes copied element(s) onto the Modelling Surface.
Delete (Del) Deletes the selected element(s).
Select All (Ctrl+A) Selects all elements on the Modelling Surface.
Copy Modelling Surface Copies the entire Modelling Surface as an image.
Simulation Menu
Item Description
Run (F5) Executes the simulation model.
Pause Pauses execution of the simulation model.
Halt Terminates execution of the simulation model.
Check (F7) Performs an integrity check of the simulation model.
A.3. DEVELOPING SIMULATION MODELS
Results Menu
Item Description
Statistics. . . Displays the statistics report.
Costs. . . Displays the costs report.
Emissions. . . Displays the emissions report.
View Menu
Item Description
Refresh (Ctrl+R) Redraws the Modelling Surface.
Organize Attempts to aesthetically organize the Modelling Surface.
Zoom. . . Opens the Zoom dialog.
Zoom to 100% Zooms the modelling surface to 100%.
Zoom to Selection Zooms the modelling surface to the selected element(s).
Restore Default Layout Restores default layout of user interface elements.
Options. . . Opens the Options dialog.
Help Menu
Item Description
About. . . Opens the About dialog.
Grid
GridSize: A pair of numbers indicating the horizontal/vertical distance between grid lines.
Inputs
(Name): The name of the scenario.
Seed: The seed value for the pseudo-random number generator. If this is set
to zero, the pseudo-random number generator will be seeded using the
system time, which will result in a different sequence of pseudo-random
numbers each time the scenario is executed. Setting this to a non-zero
value will result in the same sequence being generated each time the
scenario is executed, i.e., the results will be identical each time.
StartDate: The date at which simulation will begin: simulation time zero
will correspond to midnight on this date.
TimeUnit: The unit of real time that one unit of simulation time corresponds to.
Reports
Statistics
Design The Design category contains the name of the modelling element
together with a description.
Inputs This category contains properties that affect the simulation behaviour
of the modelling element.
Layout The Layout category contains properties that specify the location,
size, and colour of a modelling element.
Outputs The Outputs category contains properties that display the results
of a simulation. They differ from statistics (below) in that they only display
a single value for the most recent run.
Connection Points
Most modelling elements in Simphony will have connection points that allow
entities to flow into and out of the element. Connection points at which
entities flow into a modelling element will point towards the element, while
connection points at which entities leave a modelling element will point away
from it. The orientation of a modelling element's connection points can
be changed by right-clicking on the element and selecting the Rotate Ports 180°,
Rotate Ports 90° Clockwise, or Rotate Ports 90° Counter-Clockwise item
from the context menu as shown in Figure A.4.
Relationships
Relationships define the direction that entities flow through a model.
Relationships can be created between modelling elements by dragging the output
point of one element to the input point of another as shown in Figure A.5.
Summary Reports
A high-level view of the simulation results can be accessed from the three
reports available under the Results menu:
The Statistics report summarizes all of the statistics collected during
simulation. It breaks the statistics into five groups: non-intrinsic
statistics, intrinsic statistics, counters, resources, and waiting files (i.e.,
queues).
The Costs report summarizes all cost information that was collected
during simulation. This report is broken down into the various cost
categories specified when the costs were collected.
The Emissions report summarizes all emission information that was
collected during simulation.
Statistics Report
Date: Sunday, January 25, 2015
Project: Model
Scenario: Scenario1
Run: 1 of 1
Non-Intrinsic Statistics
Element Name                  Mean Value   Standard Deviation  Observation Count  Minimum Value  Maximum Value
Scenario1 (Termination Time)  60,090.864   0.000               1.000              60,090.864     60,090.864

Counters
Element Name  Final Count  Overall Productivity  Average Interarrival  First Arrival  Last Arrival
Chainage      1,227.000    0.020                 48.989                30.000         60,090.864

Resources
Element Name  Average Utilization  Standard Deviation  Maximum Utilization  Current Utilization  Current Capacity
Crane         42.8 %               49.5 %              100.0 %              0.0 %                1.000
TBM           61.3 %               48.7 %              100.0 %              100.0 %              1.000
Track         100.0 %              0.0 %               100.0 %              100.0 %              1.000

Waiting Files
Element Name  Average Length  Standard Deviation  Maximum Length  Current Length  Average Wait Time
CraneQ        0.000           0.000               1.000           0.000           0.000
TrackQ        0.572           0.495               1.000           1.000           27.969
TrainQ        0.000           0.000               1.000           0.000           0.000
Both the Trace and Debug Windows have a toolbar at the top. The first
toolbar item on both windows is a combo box that allows you to enable or
disable trace (or debug) output. By default, trace output is enabled and
debug output is disabled. When utilizing trace (or debug) output, keep in
mind that it has a significant impact on simulation performance: running
a sophisticated simulation model will take considerably longer if trace (or
debug) output is enabled.
The next toolbar item on the Trace Window is a combo box that specifies
which trace categories should be displayed. Whenever a trace message is
generated, it can (optionally) be associated with a trace category. The combo
box allows you to filter the trace output to show only a particular category
of interest. Note that there is no such combo box on the Debug Window, as
debug messages are simply trace messages associated with the special Debug
category.
Finally, on both the Trace and Debug Windows, the toolbar provides
buttons that allow you to save the trace (or debug) output to a text file,
send it to a printer, or copy it to the clipboard.
Output Properties
Many modelling elements have output properties that can be viewed in the
Property Grid under the Outputs category. When reviewing output properties
of scenarios configured for multiple runs, keep in mind that the value
displayed is for the last run that was executed.
Statistical Properties
Many modelling elements have statistical properties that can be viewed in
the Property Grid under the Statistics category. Unlike output properties,
statistical properties can summarize information across multiple runs. In the
APPENDIX B. VISUAL BASIC INTRODUCTION
it is doing. For the purposes of this tutorial, we will use Simphony's Trace
Window for output. The command needed to write to the Trace Window is
called TraceLine. To illustrate, here is the code for the traditional Hello,
World! program as the Expression formula of our Execute element:
Public Partial Class Formulas
    Public Shared Function Formula(...) As System.Boolean
        TraceLine("Hello, World!")
        Return True
    End Function
End Class
Let's examine this formula line by line. The first and last lines define a
class that will contain not only this formula, but all other formulas used by
a model. These two lines will be present in every formula you write, and
should never be modified. All of your Visual Basic code will be placed inside
this class definition. Next, the second and fifth lines define the function that
represents our formula. As with the class definition, these two lines will be
present in every formula you write, and should never be modified. Unlike the
class definition, however, they will vary between formulas. In particular, the
return type of the formula can change. The return type of the formula above
is System.Boolean, which means a boolean true/false value. Henceforth, we
will omit these four lines from our code listings.
The most important lines for us are the third and fourth. The third
line is the call to the TraceLine command, which takes a single parameter
specifying what should be written to the Trace Window. In this case, we
are specifying the text string Hello, World!. The fourth line begins with a
Return statement, a special statement in Visual Basic that indicates
that what follows is the return value of the formula, and that processing
of the formula is over. In this case, we are returning the value True, which
indicates to the Execute element that the entity being processed should be
passed on to subsequent modelling elements. All of the formulas that we
write in this chapter will end with such a Return statement.
When the model is run inside Simphony, the appropriate trace output is
generated, as shown in Figure B.2.
B.2 Comments
All programming languages support comments that allow you to add text
to your code that makes it easier to understand. In Visual Basic, comments
begin with a single quotation mark ('), after which everything
until the end of the line is considered a comment and is ignored by Visual
Basic. Here is the Hello, World! program with a comment:
' Write the phrase "Hello, World!" to trace output.
TraceLine("Hello, World!")
Return True
We will use comments frequently in this chapter to make our examples easier
to understand.
B.3 Variables
In Visual Basic, variables are the tools that allow you to perform calculations.
They correspond in many ways to the cells of a spreadsheet. Every variable
has both a name and a data type. Variable names must begin with a letter,
and may thereafter contain letters, digits, and (infrequently) the underscore
character. The most commonly used data types for variables are shown in
Table B.1.
Before they can be used, variables must be declared using the Dim keyword
(which is short for Dimension). The syntax for the Dim keyword is:
Dim <Name> As <Data Type>
Normally, when a variable is declared it is initialized to a specific value using
the assignment operator (=); if this is not done, the variable will have its
default value as shown in Table B.1. Here are some examples of declaring
variables:
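The original example listing is not reproduced here; the following is an illustrative sketch of such declarations (the variable names are hypothetical, except X2, which the text refers to later):

```vb
' An integer initialized to 10 using the assignment operator.
Dim N As Integer = 10

' A double initialized to 3.5.
Dim X2 As Double = 3.5

' A string initialized to a literal value.
Dim Message As String = "Hello, World!"

' A boolean declared without initialization; it takes its
' default value from Table B.1.
Dim Done As Boolean
```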
Table B.2: Common operators by data type.
Data Type  Operators
Boolean    Not (logical negation), And (logical and), Or (logical inclusive or), Xor (logical exclusive or)
Integer    - (negation), + (addition), - (subtraction), * (multiplication), / (division), Mod (modulus), ^ (exponent)
Double     - (negation), + (addition), - (subtraction), * (multiplication), / (division), Mod (modulus), ^ (exponent)
String     & (concatenation)

Table B.3: Comparison operators by data type.
Data Type  Operators
Boolean    = (equality), <> (inequality)
Integer    = (equality), <> (inequality), < (less than), > (greater than), <= (less than or equal to), >= (greater than or equal to)
Double     = (equality), <> (inequality), < (less than), > (greater than), <= (less than or equal to), >= (greater than or equal to)
String     = (equality), <> (inequality), < (less than), > (greater than), <= (less than or equal to), >= (greater than or equal to)
In Visual Basic, variable names are not case-sensitive. This means that
you can refer to the example variable named X2 by either X2 or x2. It is a
good idea, however, to get into the habit of being as consistent as possible
with the case of variable names, as some programming languages (e.g., C#,
Java, and Python) are case-sensitive and would consider X2 and x2 to be
different variables.
B.4 Operators
Variables are manipulated using operators. Table B.2 lists the most common
operators for each data type. Here are some examples of using these
operators:
' BOOLEAN VARIABLES: R will be assigned a value of True
' if and only if P and Q are both False; otherwise it
' will be assigned a value of False.
R = Not (P Or Q)

' NUMERIC VARIABLES: D is assigned the product of -A and
' the sum of B and C.
D = -A * (B + C)
The above operators will (normally) evaluate to a value of the same type
as their operands. Visual Basic supports another set of operators that
always evaluate to a boolean value regardless of their operands. These are the
comparison operators, and they are summarized in Table B.3.
For text strings, a string S1 is considered to be less than another string
S2, if S1 precedes S2 when sorted alphabetically. Similarly, S1 is considered
to be greater than S2, if S1 follows S2 when sorted alphabetically.
There are several examples of using comparison operators in the section
on conditional statements below.
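As a minimal sketch of the string rules above (the string values are illustrative):

```vb
' "Apple" precedes "Banana" alphabetically, so the
' comparison below evaluates to True.
Dim S1 As String = "Apple"
Dim S2 As String = "Banana"
Dim R As Boolean = S1 < S2
TraceLine(CStr(R))
Return True
```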
Function Description
CBool(<Argument>) Converts the specied argument to a boolean.
CInt(<Argument>) Converts the specied argument to an integer.
CDbl(<Argument>) Converts the specied argument to a double.
CStr(<Argument>) Converts the specied argument to a string.
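A brief sketch of these conversion functions in use (the values are illustrative):

```vb
' Convert a number to a string so it can be concatenated.
Dim Count As Integer = 42
TraceLine("The count is " & CStr(Count))

' Convert a string to a double so arithmetic can be performed.
Dim Total As Double = CDbl("3.5") + 1.0
TraceLine("The total is " & CStr(Total))
Return True
```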
B.7 Loops
Loops allow you to repeat the same block of code multiple times. Visual
Basic supplies a number of different types of looping constructs. The
simplest of these is the While...End While statement, which causes flow of
execution to loop as long as a certain condition is satisfied. The syntax for the
While...End While statement is:
While <Condition>
    <Statements>
End While
And here's an example of using a While...End While statement:
' Writes the text "Hello, World!" to trace output a
' number of times equal to the value of the variable N.
' When the loop exits, the variable N will have a value
' of 0.
While N > 0
    TraceLine("Hello, World!")
    N = N - 1
End While
Another commonly used looping construct is the For...Next statement,
which allows you to repeat a block of code a specified number of times.
It differs from the While...End While statement in that you must supply a
counter variable. The syntax of the For...Next statement is:
For <Counter> As <Data Type> = <Start> To <Finish>
    <Statements>
Next
The following example is similar to the one above for the While...End While
statement, but illustrates how the counter variable can be used inside the
body of the loop:
' Writes the text "Hello, World!" to trace output a
' number of times equal to the value of the variable N.
' This time, the phrase is prefixed by the iteration
' number. When the loop exits, the value of the variable
' N will not have changed.
For I As Integer = 1 To N
    TraceLine(CStr(I) & " Hello, World!")
Next
Appendix C
Formula Properties and Methods
C.6 Mathematics
System.Math.Cos(d) Returns the cosine of an argument, d, specified in radians.
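As a brief sketch of calling this from a formula (the angle value is illustrative):

```vb
' Convert 60 degrees to radians, then take the cosine to
' obtain the horizontal component of a unit vector.
Dim Angle As Double = 60.0 * System.Math.PI / 180.0
Dim Horizontal As Double = System.Math.Cos(Angle)
TraceLine("Horizontal component: " & CStr(Horizontal))
Return True
```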
ISBN 978-1-55195-357-1