UNIT-I
DEFINITION OF SIMULATION
Simulation is “the process of designing a model of a real system and conducting experiments with this model
for the purpose of understanding the behavior of the system or of evaluating various strategies for the
operation of the system.”
Four main classes of system can be identified:
Natural systems: systems whose origins lie in the origins of the universe, e.g. the atom, the Earth’s weather
system and galactic systems.
Designed physical systems: physical systems that are a result of human design, e.g. a house, a car and a
production facility.
Designed abstract systems: abstract systems that are a result of human design, e.g. mathematics and
literature.
Human activity systems: systems of human activity that are consciously, or unconsciously, ordered, e.g. a
family, a city and political systems.
1.3.2 The advantages of simulation
Time can be compressed or expanded to allow for a speed-up or slow-down of the phenomenon (the
simulation clock is under the modeller’s control).
Insight can be obtained about the interaction of variables and about which variables are important to
performance.
Bottleneck analysis can be performed to discover where work in process, information or materials are being
delayed.
A simulation study can help in understanding how the system operates.
Disadvantages of simulation
Model building requires special training (in mitigation, vendors of simulation software have been actively
developing packages that contain models that only need input data).
Simulation results can be difficult to interpret.
Simulation modeling and analysis can be time consuming and expensive.
Entities, activities and processes, and the order in which they engage, together define the logical
structure of the model. This leads to the model’s state transitions and to the representation of each state by a
set of variables which suits the purpose of the simulation study.
The logical structure of the model is commonly represented diagrammatically. There are several general
and DE-specific diagrammatic techniques and tools, such as Activity Cycle Diagrams, Event Graphs and
networks, that facilitate the conception and the design of the structure of the model.
For example, consider two entities: a job seeker and the Administrative Officer of an Employment Service
Centre. An activity cycle diagram was drawn for each of them and then the two diagrams were merged into a
complete activity cycle diagram.
Figure 2.6: Complete activity cycle diagram.
N, Q and S are state variables.
In next-event or event-driven simulations, time advances in steps of variable length, i.e. time advances
when an event triggers a state transition. Figure 2.7 represents the state transitions of the same single-server
queue system but, in this simulation, time is controlled by the next-event technique.
Figure 2.7: State transitions of the single-server queue under the next-event technique.
The state of the system is plotted only when a change occurs. Thus, for example, on the sixth hour the arrival
of one more customer increased the number of customers that have entered the system (N) and the number
of customers waiting in the queue (Q), but did not change the server’s status (S), as it was busy
serving a preceding customer.
A discrete event simulation moves forward by allowing each entity to choose the next time it needs to do
something (be fully assembled, change to a different workstation, and so on). For this type of modeling it is
desirable to have a clock that jumps from time to time and only does the things that need doing at that time.
It is possible to make time jump from place to place in Vensim; however, at each time step everything gets
done. This is very inefficient for discrete event style simulation, where only one out of, say, 100,000 things
might need to be computed.
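To make the next-event mechanism concrete, here is a minimal Python sketch of the single-server queue, tracking the state variables N, Q and S used above. The exponential inter-arrival and service times and their means are illustrative assumptions, not taken from the figures:

```python
import heapq
import random

def next_event_queue(sim_end=100.0, mean_arrival=5.0, mean_service=4.0):
    """Next-event simulation of a single-server queue.
    N = customers that have entered, Q = number waiting,
    S = server status (0 idle, 1 busy)."""
    t, N, Q, S = 0.0, 0, 0, 0
    events = [(random.expovariate(1.0 / mean_arrival), "arrival")]
    while events:
        t, kind = heapq.heappop(events)  # the clock jumps to the next event
        if t > sim_end:
            break
        if kind == "arrival":
            N += 1
            if S == 0:  # server idle: start service immediately
                S = 1
                heapq.heappush(events,
                               (t + random.expovariate(1.0 / mean_service), "departure"))
            else:       # server busy: join the queue
                Q += 1
            heapq.heappush(events,
                           (t + random.expovariate(1.0 / mean_arrival), "arrival"))
        else:  # departure
            if Q > 0:   # next customer in the queue starts service
                Q -= 1
                heapq.heappush(events,
                               (t + random.expovariate(1.0 / mean_service), "departure"))
            else:       # queue empty: server becomes idle
                S = 0
        print(f"t={t:6.2f}  N={N}  Q={Q}  S={S}")  # state recorded only at events

next_event_queue()
```

Note that the state is only recomputed at event times; between events nothing needs to be done, which is exactly what makes the next-event technique efficient.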
WHAT IS CONCEPTUAL MODELLING?
Conceptual modelling is about abstracting a model from a real or proposed system. All simulation models are
simplifications of reality. Conceptual modelling is the process of creating the conceptual model. Based on the
definition given above this requires the following activities:
• Understanding the problem situation (a precursor to conceptual modelling)
• Determining the modelling and general project objectives
• Identifying the model outputs (responses)
• Identifying the model inputs (experimental factors)
The modelling objectives are specific to a particular model and modelling exercise. Different modelling
objectives lead to different models within the same problem situation. The modelling objectives are intrinsic
to the description of a conceptual model. Without the modelling objectives, the description of a conceptual
model is incomplete.
The inputs (or experimental factors) are those elements of the model that can be altered to effect an
improvement in, or better understanding of, the problem situation. They are determined by the objectives.
The outputs (or responses) report the results from a run of the simulation model. These have two purposes:
first, to determine whether the modelling objectives have been achieved; second, to point to reasons why the
objectives are not being achieved, if they are not.
The model content consists of the components that are represented in the model and their interconnections.
The content can be split into two dimensions (Robinson, 1994):
• The scope of the model: the model boundary or the breadth of the real system that is to be included in the
model.
• The level of detail: the detail to be included for each component in the model’s scope.
The model content is determined, in part, by the inputs and outputs, in that the model must be able to accept
and interpret the inputs and to provide the required outputs. The model content is also determined by the
level of accuracy required. More accuracy generally requires a greater scope and level of detail.
Assumptions are ways of incorporating uncertainties and beliefs about the real world into the model.
Simplifications are ways of reducing the complexity of the model.
6.3 Methods of Model Simplification
Simplifications are not assumptions about the real world, but they are ways of reducing the complexity of a
model. Model simplification involves reducing the scope and level of detail in a conceptual model either by:
- removing components and interconnections that have little effect on model accuracy, or by
- representing components and interconnections more simply while maintaining a satisfactory level of
model accuracy.
The main purpose of simplification is to increase the utility of a model while not significantly affecting its
validity or credibility.
6.3.1 Aggregation of model components
Aggregation of model components provides a means for reducing the level of detail. Two specific approaches
are described here: black-box modelling and grouping entities.
Black-box modelling
In black-box modelling a section of an operation is represented as a time delay. Model entities that represent
parts, people, information and such like enter the black-box and leave at some later time. This approach can
be used for modelling anything from a group of machines or service desks to a complete factory or service
operation.
Grouping entities: a simulation entity can represent a group of items. This is particularly useful when there is
a high volume of items moving through a system.
6.3.6 Splitting models
A simple way of reducing the size of a large model is to split it into sub-models such that the output of one sub-model (model A) is the
input to another (model B), see Figure 6.3. As model A runs, data concerning the output from the model, such
as output time and any entity attributes, can be written to a data file. Model B is then run and the data read
such that the entities are recreated in model B at the appropriate time.
The advantage of splitting models is that the individual models run faster.
Where splitting models is not so successful is when there is feedback between the models. For instance, if
model B cannot receive entities, because the first buffer is full, then it is not possible to stop model A
outputting that entity, although in practice this is what would happen. It is best, therefore, to split models at a
point where there is minimal feedback, for instance, where there is a large buffer.
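A minimal sketch of the hand-over between split models, assuming a CSV file as the interface; the file name and the part_type attribute are illustrative, not prescribed by the text:

```python
import csv

def model_a_write(exits, path="model_a_output.csv"):
    """Model A: record each entity's output time and attributes to a data file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["output_time", "part_type"])
        writer.writerows(exits)

def model_b_read(path="model_a_output.csv"):
    """Model B: read the file so entities can be recreated at the recorded times."""
    with open(path) as f:
        for row in csv.DictReader(f):
            yield float(row["output_time"]), row["part_type"]

model_a_write([(2.5, "widget"), (7.1, "gadget")])
for time, part in model_b_read():
    print(f"recreate a {part} entity in model B at t={time}")
```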
OBTAINING DATA
• Three types of data:
Category A -- data are available either because they are known or because they have been collected
previously.
– e.g. the physical layout of a manufacturing plant and the cycle times of the machines
– it is important to ensure that the data are accurate and in the right format
Category B -- data need to be collected, e.g. service times, arrival patterns, machine failure rates and
repair times, and the nature of human decision-making. The data collection exercise can be done by either
getting people or electronic systems to monitor the operation; they are called data providers. It is
important to ensure that the data obtained are both accurate and in the right format.
Category C -- data are not available and cannot be collected. These often occur because the real world
system does not yet exist, or because of a lack of time, e.g. observing machine repairs within a 6-week study.
Table : Categories of data availability and collectability
Category A -- Available
Category B -- Not available but collectable
Category C -- Not available and not collectable
Dealing with unobtainable (category C) data:
• There are two ways --
1. Estimate the data --
– The data may be estimated from various sources. It may be possible to obtain surrogate data
from a similar system in the same, or another organization.
– For some processes standardized data exist.
– Discussion with subject matter experts, such as staff and equipment suppliers
– make an intelligent guess.
– Estimating the data creates uncertainty about the validity of the model and will reduce its credibility.
2. Treat the data as an experimental factor--
– Instead of asking what the data are, ask: what do the data need to be?
– example of machine failure ......
– need to have some control over the data
7.4 Representing Unpredictable Variability
• Unpredictable (or random) variability is at the heart of simulation modelling.
• Many operations are subject to such variability, e.g. customer arrivals, service and processing
times and routing decisions.
• There are three options to represent the variability:
– Traces,
– Empirical Distributions and
– Statistical Distributions.
• There is also a fourth option for modelling unpredictable variability, known as bootstrapping.
Data accuracy
• Available and collected data need to be checked for accuracy.
• If data are already available, questions should be asked about where they came from and how reliable they are.
• If the data are too inaccurate, an alternative is to rely on expert judgement and analysis.
• Data might be cleaned by removing outliers and obvious errors.
• Failing that, the data could be treated as category C.
Data format
• Data need not just be accurate, they also need to be in the right format for the simulation.
• Time study data are aggregated to determine standard times for activities.
• In a simulation the individual elements (e.g. breaks and process inefficiencies) are modelled
separately.
As a result, standard time information is not always useful (in the right format) for a simulation model.
Traces enable a simulation to represent historic events in the real system exactly as they occurred.
• A trace is a stream of data that describes a sequence of events
• It holds data about the times at which the events occur, and data about the events themselves, such as the
type of part to be processed or the nature of the fault.
• The data are normally stored in a data file or a spreadsheet.
• It is obtained by collecting data from the real system.
• For instance, in a call centre the call handling system collects data on call arrivals, call routes and call
durations.
Traces may also help to improve the credibility of a model, since the clients can see patterns they recognize
occurring in the model and the approach is more transparent than using random numbers to sample from
distributions.
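As a small illustration, a trace can be held as a simple table of timestamped records and replayed exactly; the call times and durations below are made up:

```python
# Illustrative trace: (call arrival time, call duration) pairs, as might be
# exported from a call handling system.
trace = [(0.8, 3.2), (2.1, 5.0), (2.4, 1.7), (6.0, 4.4)]

# Trace-driven replay: events occur exactly as recorded, with no sampling.
for arrival, duration in trace:
    print(f"call arrives at t={arrival:.1f} and is handled for {duration:.1f} minutes")
```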
The use of empirical distributions: an empirical distribution shows the frequency with which data values, or
ranges of data values, occur.
• It is represented by histograms or frequency charts.
• Empirical distributions are normally based on historic data; they can also be formed by summarizing the
data in a trace.
• If the empirical distribution represents ranges of data values, then it should be treated as a continuous
distribution, so values can be sampled anywhere in the range.
It does not use up large quantities of computer memory. Because a sampling process is employed, based on
the use of random numbers, it is possible to change the stream of random numbers and alter the pattern of
variability in the model. This is an important advantage and is the basis for performing multiple replications.
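A minimal sketch of sampling from an empirical distribution whose values are grouped into ranges, treated as continuous; the ranges and frequencies are illustrative:

```python
import random

# Illustrative empirical distribution: (range lower bound, upper bound, frequency).
histogram = [(0, 5, 10), (5, 10, 25), (10, 15, 45), (15, 20, 20)]

def sample_empirical(hist):
    """Pick a range with probability proportional to its frequency, then
    sample uniformly within it (continuous treatment of the ranges)."""
    total = sum(freq for _, _, freq in hist)
    r = random.uniform(0, total)
    cumulative = 0
    for low, high, freq in hist:
        cumulative += freq
        if r <= cumulative:
            return random.uniform(low, high)
    return random.uniform(hist[-1][0], hist[-1][1])  # floating-point guard

random.seed(1)   # changing the seed changes the random stream (replications)
print([round(sample_empirical(histogram), 2) for _ in range(5)])
```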
• There are many standard statistical distributions available, but the most commonly known is the normal distribution.
• Two other continuous distributions are:
– negative exponential distributions and
– Erlang distributions.
• Negative exponential -- the distribution has only one parameter, the mean.
• It is derived from random arrival processes.
• It is used to sample the time between events, such as the inter-arrival time of customers and the time
between failures of machines.
• The negative exponential distribution gives a high probability of
sampling values of x close to zero and a low probability of sampling higher values.
When modelling inter-arrival times this implies that the majority of arrivals occur quite close together
with occasional long gaps.
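A minimal sketch of negative exponential sampling by the inverse transform x = -mean ln(1 - u); Python's built-in random.expovariate gives the same distribution:

```python
import math
import random

def neg_exponential(mean, u=None):
    """Inverse-transform sample: x = -mean * ln(1 - u), with u ~ U(0, 1)."""
    if u is None:
        u = random.random()
    return -mean * math.log(1.0 - u)

# Inter-arrival times with an illustrative mean of 5 minutes: most samples
# fall close to zero, with occasional long gaps, as described above.
random.seed(1)
print([round(neg_exponential(5.0), 2) for _ in range(8)])
```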
Erlang distribution
• It has two parameters, the mean and a positive integer k.
• k determines the skew, that is, the length of the tail to the right.
• If k is one, the Erlang distribution is the same as a negative exponential distribution.
• As the value of k increases, the skew reduces and the hump of the distribution moves towards the centre.
Typically, values of k between 2 and 5 are used.
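A minimal sketch of Erlang sampling, using the fact that an Erlang-k variate is the sum of k independent exponentials each with mean mean/k; the mean of 10 and the k values are illustrative:

```python
import random

def erlang(mean, k):
    """Erlang sample: sum of k independent exponentials with mean mean/k.
    With k = 1 this reduces to the negative exponential distribution."""
    return sum(random.expovariate(k / mean) for _ in range(k))

# As k increases the skew reduces; the sample mean stays close to 10.
random.seed(1)
for k in (1, 2, 5):
    samples = [erlang(10.0, k) for _ in range(10000)]
    print(k, round(sum(samples) / len(samples), 2))
```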
Statistical distributions maintain some of the advantages of empirical distributions, namely, the limited use
of computer memory and the ability to change the random streams and so perform multiple replications. The
use of a statistical
distribution is an attempt to represent the population distribution rather than just a sample as given by an
empirical distribution or trace. The statistical distribution should give the full range of variability that might
occur in practice.
• Conceptual Model Validation: determining that the content, assumptions and simplifications of the
proposed model are sufficiently accurate for the purpose at hand. The question being asked is: does the
conceptual model contain all the necessary details to meet the objectives of the simulation study?
• Data Validation: determining that the contextual data and the data required for model realization and
validation are sufficiently accurate for the purpose at hand. As shown in Figure 12.1, this applies to all stages
in a simulation study, since data are required at every point.
• White-Box Validation: determining that the constituent parts of the computer model represent the
corresponding real world elements with sufficient accuracy for the purpose at hand. This is a detailed, or
micro, check of the model, in which the question is asked: does each part of the model represent the real
world with sufficient accuracy to meet the objectives of the simulation study?
• Black-Box Validation: determining that the overall model represents the real world with sufficient
accuracy for the purpose at hand. This is an overall, or macro, check of the model’s operation, in which the
question is asked: does the overall model provide a sufficiently accurate representation of the real world
system to meet the objectives of the simulation study?
• Experimentation Validation: determining that the experimental procedures adopted are providing results
that are sufficiently accurate for the purpose at hand. Key issues are the removal of initialization bias and the
choice of run-length, number of replications and sensitivity analysis to assure the accuracy of the results.
Further to this, suitable methods should be adopted for searching the solution space to ensure that learning
is maximized and appropriate solutions identified.
• Solution Validation: determining that the results obtained from the model of the proposed solution are
sufficiently accurate for the purpose at hand. This is similar to black-box validation in that it entails a
comparison with the real world. It is different in that it only compares the final model of the proposed
solution to the implemented solution.
Consequently, solution validation can only take place post implementation and so, unlike the other forms of
validation, it is not intrinsic to the simulation study itself. In this sense, it has no value in giving assurance to
the client, but it does provide some feedback to the modeller.
In Figure 12.4 data have been obtained for two inputs (X and Y) and one output (Z). A simple model is
proposed, Z = X + Y. By checking only the black-box validity it would seem that this model is correct since X =
2, Y = 2 and Z = 4. However, by checking the detail of the model (white-box validation), it might be discovered
that the relationship is wrong and in fact it should be Z = XY. Although both models give the same result
under current conditions, a simple change of the inputs to X = 3 and Y = 3 would lead to a 50% error in the
result if the wrong model is used. This type of error can only be guarded against by testing both white-box
and black-box validity.
Another danger in relying on black-box validity alone is that it can lead to the temptation to calibrate the model,
that is, to tweak the model inputs until the simulation provides the correct output. Although this can be
useful if performed in an intelligent fashion, paying attention to white-box validity as well, in isolation it can
lead to a simulation that is unrepresentative of the system it is trying to model.
12.4.5 Experimentation validation
Assuring the accuracy of simulation experiments requires attention to the issues of initial transient effects,
run-length, the number of replications and sensitivity analysis. Also, the search of the solution space should
be sufficient to obtain an adequate understanding and identify appropriate solutions.
12.4.6 Solution validation
The aim of all modelling and verification and validation efforts is to try and assure the validity of the final
solution. Once implemented, it should be possible to validate the implemented solution against the model’s
results. Solution validity should also involve checking whether the implemented solution is indeed the most
suitable one.
Welch’s method
Welch proposes a method based on the calculation and plotting of moving averages.
This involves the following steps:
• Perform a series of replications (at least five) to obtain time-series of the output data.
• Calculate the mean of the output data across the replications for each period (Yi).
• Calculate a moving average based on a window size w (start with w = 5).
• Plot the moving average on a time-series.
• Are the data smooth? If not, increase the size of the window (w) and return to the
previous two steps.
• Identify the warm-up period as the point where the time-series becomes flat.
The moving averages are calculated using the following formula:

$$\bar{Y}_i(w) = \begin{cases} \dfrac{\sum_{s=-(i-1)}^{i-1} \bar{Y}_{i+s}}{2i - 1} & \text{if } i = 1, \ldots, w \\[1ex] \dfrac{\sum_{s=-w}^{w} \bar{Y}_{i+s}}{2w + 1} & \text{if } i = w + 1, \ldots, m - w \end{cases}$$

where:
$\bar{Y}_i(w)$ = moving average of window size w
$\bar{Y}_i$ = time-series of output data (mean of the replications)
i = period number
m = number of periods in the simulation run.
In Welch’s method the aim should be to select the smallest window size that gives a reasonably smooth line.
Although selecting a larger window size will give a smoother line, it also tends to give a more conservative
(longer) estimate of the warm-up period.
Although Welch’s method requires the calculation of moving averages, it is relatively simple to use. It has the
advantage that the calculation of the moving average smooths out the noise in the data and helps to give a
clearer picture of the initial transient.
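A minimal sketch of the calculation, following the formula above; the replication data are an illustrative noisy series with an exponential initial transient:

```python
import numpy as np

def welch_moving_average(replications, w):
    """Average the output across replications per period, then smooth
    with a moving average of window size w (shrinking near the start)."""
    y = np.mean(replications, axis=0)   # Y_i: mean across replications
    m = len(y)
    smoothed = []
    for i in range(m - w):              # periods i = 1, ..., m - w
        if i < w:                       # window of size 2i - 1 near the start
            smoothed.append(np.mean(y[: 2 * i + 1]))
        else:                           # full window of size 2w + 1
            smoothed.append(np.mean(y[i - w: i + w + 1]))
    return smoothed

# Five illustrative replications: a rising transient plus random noise.
rng = np.random.default_rng(1)
reps = [10 - 8 * np.exp(-0.1 * np.arange(100)) + rng.normal(0, 1, 100)
        for _ in range(5)]
print(np.round(welch_moving_average(reps, w=5)[:10], 2))
```

Plotting the returned series and looking for the point where it flattens gives the warm-up period.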
SIMULATION OPTIMIZATION
In simulation optimization the aim is to find the factor/level combination that gives the best value for a
response, that is the maximum or minimum value. The problem can be thought of in similar terms to
standard mathematical optimization methods. First, there is some objective function to be optimized,
typically, cost, profit, throughput or customer service. Secondly, there is a set of decision variables that can
be changed; in simulation these are the experimental factors. Finally, there are a series of constraints within
which the decision variables can be changed; this is expressed in terms of the range within which the
experimental factors can be altered.
A common use of simulation optimizers is to leave them to perform an intelligent search over an extended
period, for instance, overnight.
Although simulation optimization provides some significant advantages in terms of searching the solution
space, there are also some shortcomings in its use:
• It can lead to a concentration on instrumental learning (what is the result?) rather than conceptual learning
(why is that happening?). The latter is a key benefit of simulation.
• It deals with a single objective function when there may be multiple objectives.
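Commercial optimizers use heuristic search methods; the following sketch only illustrates the structure of the problem (an objective function, decision variables as experimental factors, and range constraints) with a naive random search and a stand-in function in place of a real simulation run. All names and values are hypothetical:

```python
import random

def random_search(simulate, ranges, n_trials=200, seed=0):
    """Naive random search over the experimental factors within their
    constraint ranges, maximizing the simulated response."""
    rng = random.Random(seed)
    best_factors, best_response = None, float("-inf")
    for _ in range(n_trials):
        factors = {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}
        response = simulate(**factors)
        if response > best_response:
            best_factors, best_response = factors, response
    return best_factors, best_response

# Stand-in for a simulation run: throughput as a noisy function of two factors.
def fake_simulation(buffer_size, staff):
    return -((buffer_size - 20) ** 2) - ((staff - 6) ** 2) + random.gauss(0, 1)

best, value = random_search(fake_simulation,
                            {"buffer_size": (5, 50), "staff": (1, 12)})
print(best, round(value, 2))
```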
Sensitivity Analysis
In sensitivity analysis the consequences of changes in model inputs are assessed. In this context model
inputs are interpreted more generally than just experimental factors and include all model data.
The input (I) is varied, the simulation is run and the effect on the response is measured. If there is a
significant shift in the response (the gradient is steep), then the response is sensitive to the change in the
input. If there is little change (the gradient is shallow), then the response is insensitive to the change.
Sensitivity analysis is useful in three main areas:
• Assessing the effect of uncertainties in the data, particularly category C data.
• Understanding how changes to the experimental factors affect the responses.
• Assessing the robustness of the solution.
All of these can provide useful information to both the model user and the clients by improving their
understanding of the model.
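A minimal sketch of a one-at-a-time sensitivity run; the model function, input names and percentage changes are hypothetical stand-ins for a real simulation:

```python
def sensitivity(simulate, base_inputs, name, deltas):
    """Vary one input around its base value, run the model, and record
    the (input value, response) pairs; a steep change indicates sensitivity."""
    results = []
    for delta in deltas:
        inputs = dict(base_inputs)
        inputs[name] *= (1 + delta)
        results.append((inputs[name], simulate(**inputs)))
    return results

# Stand-in for a model run.
def fake_model(arrival_rate, repair_time):
    return 100 - 2 * arrival_rate - 8 * repair_time

base = {"arrival_rate": 10, "repair_time": 1.5}
for value, response in sensitivity(fake_model, base, "repair_time",
                                   [-0.2, -0.1, 0.0, 0.1, 0.2]):
    print(f"repair_time={value:.2f} -> response={response:.1f}")
```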
Generic models
A generic model is a simulation of a particular context that can be used across a number of organizations. For
instance, a generic model could be developed of a check-in desk area at an airport. The same model could be
used for other parts of the airport and for other airports. These models are normally quite focused, aiming to
address only a limited number of issues, for instance, effects of different departure schedules and staff
rosters.
A key problem with developing generic models is being able to represent a system across a range of
organizations.
Reusable models/components
Generic models are a special case of reusable models/components. A reusable model implies that a complete
model is used in another context and/or for another purpose than that for which it was originally intended
(note that proper use of a generic model entails using a model in another context for the same purpose as
originally intended). A reusable component involves using a part of a model, normally as part of a new
simulation model, in another context and/or for another purpose.
2.4 System Dynamics (SD) Modeling and Simulation (M&S)
System dynamics (SD) is an analytical M&S method. The method is based on Euler’s approximation, solving a
differential equation over discrete sub-sections of a continuous time interval.
The SD method is designed to model the behavior of changing system states over time. In SD, instead of
independent and identically distributed (IID) entities, the user deals with homogenized entities, stocks
(accumulations) and flows, which continually interact over time to form a unified whole [6]. SD is a toolset for
learning and understanding how a system’s behavior, policy or strategy changes over time.
The main advantage of this method is its ability to focus on the aggregate effect, rather than the individual
effect of individual entities. It allows for modeling the mathematics, the relationships, and each of the causal
dependencies in a dynamic system.
The SD method deals effectively with parallel synchronization challenges by updating, in a short amount of
time, all variables and increments together with the positive and negative feedbacks and time delays that
compose the interactions.
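A minimal sketch of Euler’s approximation applied to a single stock and flow; the growth-rate model and parameter values are illustrative:

```python
def simulate_stock(initial=100.0, growth_rate=0.05, dt=0.25, horizon=10.0):
    """Euler integration of dStock/dt = growth_rate * Stock over
    discrete sub-sections (of length dt) of the time interval."""
    stock, t = initial, 0.0
    history = [(t, stock)]
    while t < horizon:
        flow = growth_rate * stock   # flow depends on the current state
        stock += flow * dt           # Euler step: stock accumulates the flow
        t += dt
        history.append((t, stock))
    return history

for t, s in simulate_stock()[::8]:   # print every 2 time units
    print(f"t={t:5.2f}  stock={s:7.2f}")
```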
The random number generator (technically called a pseudo-random number generator) is a software
routine that generates a random number between 0 and 1 that is used in sampling random distributions. For
example, let us assume that you have determined that a given process delay is uniformly distributed between
10 minutes and 20 minutes. Then every time an entity went through that process, the random number
generator would generate a number between 0 and 1 and evaluate the uniform distribution formula that
has a minimum of 10 and a maximum of 20.
As an example, let us assume that the generated random number is 0.7312, then the delay would be
10+(0.7312)*(20-10) = 17.312. So the entity would delay for 17.312 time units in the simulation. Everything
that is random in the simulation uses the random number generator as an input to determine values.
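A minimal sketch reproducing this worked example: a U(0, 1) random number is scaled onto the 10-20 minute range:

```python
import random

def sample_uniform(low, high, u=None):
    """Scale a random number u ~ U(0, 1) onto [low, high]:
    x = low + u * (high - low)."""
    if u is None:
        u = random.random()   # pseudo-random number between 0 and 1
    return low + u * (high - low)

# The worked example from the text: u = 0.7312 gives a 17.312-minute delay.
print(sample_uniform(10, 20, u=0.7312))   # 17.312
```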
• System state
– The collection of state variables necessary to describe the system at a particular time
• Simulation clock
– A variable giving the current value of simulated time
• Event list
– A list containing the next time when each type of event will occur
• Statistical counters
– Variables used for storing statistical information about the system’s performance
• Initialization routine
– A subprogram to initialize the simulation model at time 0
• Timing routine
– A subprogram that determines the next event from the event list and then advances the simulation clock to
the time when that event is to occur
• Report generator
– A subprogram that computes estimates (from the statistical counters) of the desired measures of
performance and produces a report when the simulation ends
• Event routine
– A subprogram that updates the system state when a particular type of event occurs
• There is one event routine for each event type
• Library routines
– A set of subprograms used to generate random observations from probability distributions that were
determined as part of the simulation model
• Main program
– A subprogram that invokes the timing routine to determine the next event and then transfers control to the
corresponding event routine to update the system state appropriately
– It then checks for termination and invokes the report generator when the simulation is over.
What is a statechart?
• It is a visual construct that enables us to define event- and time-driven behavior of various objects.
• Statecharts are very helpful in simulation modeling.
• They are used in agent based models, process and system dynamics models.
• Statecharts consist of states and transitions.
• A state can be a “concentrated history” of the object and also a set of reactions to external events
that determine the object’s future.
• The reactions in a particular state are defined by transitions. Each transition has a trigger, such as a
message arrival, a condition, or a timeout.
• When a transition is taken (“fired”) the state may change and a new set of reactions may become
active.
• Example : A laptop running on a battery
Simple states
• Drawing a statechart typically starts with drawing states, giving them names, and adjusting their sizes
and colors.
To create a state in the drawing mode:
1. Double click the State icon in the Statechart palette, or double click the pencil icon on the right hand side of
the State icon.
2. Click and hold on the canvas where the upper left point of the state should be placed.
3. Drag the cursor to the bottom right point of the state.
4. Release the mouse button
• The in-place editor of the state name opens automatically just after you create a state.
• To change the state name:
• 1. Double-click the state name in the graphical editor.
• 2. Type the new name in the in-place text editor.
• We can also change names of statechart elements in the properties window.
• To adjust the position of the state name:
• 1. Select the state by clicking it.
• 2. Drag the state name.
Transitions
• You can drag the Transition object on the canvas, connect it to states, and edit its shape, or you can draw a
transition point by point in the “drawing mode”.
• To draw a transition point by point (in the drawing mode):
• 1. Double click the Transition icon in the Statechart palette.
• 2. Click in the canvas at the position of the first point of the transition.
• 3. Continue clicking at all but the last points.
• 4. Double click at the last point position.
Composite states
• A composite state is a state which contains other states.
• A composite state is created in the same way as a simple state.
• You can drag the State on the canvas and then resize it so that it contains other states, or you can draw
it around other states in the drawing mode.
• When a state becomes composite, its fill color is set to transparent. You can change it by setting the
Fill color field in the state properties.
History state
• History states are used to model a return to the previously interrupted activity.
• History state can be located only inside a composite state.
• There are two types of history states:
• shallow and deep.
• Shallow history state remembers the last visited state on the same level of hierarchy,
• Deep history state remembers the last visited simple state, which may be at a deeper level.
• You can switch between shallow and deep in the history state properties.
Final state
• Final state is a state that terminates the statechart activity.
• Final state may be located anywhere –inside composite states or at the top level.
• There may be any number of final states in a statechart.
• A final state may not have any outgoing transitions.
• In the figure, the final state is inside state1, and upon entering it, the statechart deactivates all
transitions that were active.
State transitions
• The transitions from the current state of the statechart define how the statechart will react to
external events and conditions.
• We can test whether a state is active by calling the statechart method isStateActive(<state name>).
• We can find out the current simple state by calling getActiveSimpleState().
Timeout expressions
• The expression in the Timeout field of transitions triggered by timeouts is interpreted as a time
interval in model time units.
• If you write 10 there, it may mean 10 seconds, 10 days, 10 weeks, etc., subject to the time unit specified
in the experiment properties.
• To make the timeout expression independent of the model time unit, you should use functions like
second(), minute(), …, day(), week().
• For example, the expression 10*day() always evaluates to 10 days.
• For timeouts measured in time units larger than a week, you should use the function:
Transitions triggered by messages
• You can send messages to a statechart, and the statechart can react on message arrival.
• A message can be an arbitrary object: a string, a number, an ActiveObject, etc.
• You can set up the transition to check the message type and content.