
Nine Steps to A Successful Simulation Study

Author: Cliff King

1. Define the problem
2. Formulate an objective
3. Describe the system and list any assumptions
4. List possible alternative solutions
5. Collect data and gather information
6. Build the computer model
7. Verify and validate the model
8. Run alternative experiments
9. Analyze outputs

These nine steps are briefly defined below. This is not intended to be a comprehensive discussion, but merely a general guide. Remember that a simulation study is not a simple sequence of steps: some projects may require going back to previous steps as more insight into the system is obtained, and verification and validation will be part of every step of the project.

1. Define the problem

A model that represents all aspects of reality for your whole system is impossible or, at best, too expensive. Besides, such a model is often a bad one: it will be far too complex and hard to understand. Therefore, it is advisable to first define a problem, formulate an objective, and then build a model that is designed entirely to solve that problem.

Care must be taken not to make an erroneous assumption when defining the problem. For instance, rather than state that there are not enough receiving docks, state that truck waiting time is too long. As a guideline, formulate the problem statement as generally as possible, think of possible causes for the problem, and then, if possible, define the problem more specifically.

2. Formulate an objective and define the system performance measures

A simulation study without an objective is useless. The objective is meant to be a guide through each step of the project: the description of the system is defined with the objective in mind, the objective determines what assumptions can be made, and what information and data need to be collected depends upon the objective. The model is built and validated specifically to meet the objective, and of course the output results are collected with the purpose of satisfying it. The objective needs to be clear, unambiguous, and feasible. Objectives can often be expressed as questions, such as: "Is it more profitable to increase capacity by adding machinery or by working overtime?"
When defining the objective, it is necessary to specify the performance measures that will be used to determine whether the objective is met. Hourly production rate, operator utilization, average queuing time, and maximum queue size are typical performance measures.
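As a sketch of what these performance measures look like in practice, the snippet below computes each of them from raw run data. The numbers are hypothetical stand-ins; a real study would export such records from the simulation tool.

```python
# Hypothetical raw data from one 8-hour simulation run.
run_hours = 8.0
parts_completed = 96
operator_busy_hours = 6.4
queue_waits = [1.2, 3.5, 0.0, 2.8, 4.1]   # minutes each item waited
queue_sizes = [0, 1, 3, 2, 5, 4]           # sampled queue lengths

hourly_rate = parts_completed / run_hours        # parts per hour
utilization = operator_busy_hours / run_hours    # fraction of time busy
avg_wait = sum(queue_waits) / len(queue_waits)   # minutes
max_queue = max(queue_sizes)

print(f"rate {hourly_rate}/hr, utilization {utilization:.0%}, "
      f"avg wait {avg_wait:.2f} min, max queue {max_queue}")
```

Defining the measures this concretely up front also settles exactly which statistics the model must log.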

Finally, list any preconditions for the simulation results. For example, the objective must be realized using the existing facility, the maximum investment amount must not be exceeded, or the product lead time cannot increase.

3. Describe the system and list any assumptions

Simply stated, a simulation model captures the time it takes to do things. The times in a system are split among process times, transportation times, and queuing times. Whether the model is a logistics system, a manufacturing plant, or a service operation, it is necessary to clearly define the following modeling elements: resources, flow items (products, customers, or information), routings, item transformations, flow control, process times, and resource down times. Here's a brief description of each.

There are four basic types of resources: processors, queues, transports, and shared resources such as operators. The arrival and preloading requirements of the flow items must be defined in terms of arrival times, arrival patterns, and types of items. When defining flow routes, detailed descriptions are required for merges and diverts. Item transformations include attribute changes, assembly operations (items combining), and disassembly operations (items splitting). Often there will be a need for controlling the flow of items in the model; for instance, an item may be forced to stop until a condition or time is met, and then released per a specific set of rules. All process times must be defined, clearly listing which times are operator dependent and which are automatic. Resources may have planned and unplanned down times. Planned down times are typically lunches, breaks, or preventive maintenance. Unplanned down times are breakdowns that occur randomly; therefore, a mean time between failures (MTBF) and a mean time to repair (MTTR) must be defined.
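To illustrate how MTBF and MTTR turn into concrete down-time events, here is a minimal sketch. The exponential distribution is a common (assumed) choice for random breakdowns, and the parameter values are hypothetical.

```python
import random

random.seed(42)  # fixed seed so the schedule is reproducible

MTBF = 120.0  # mean time between failures, minutes (hypothetical)
MTTR = 15.0   # mean time to repair, minutes (hypothetical)

def failure_schedule(horizon):
    """Yield (failure_time, repair_duration) pairs up to `horizon` minutes."""
    t = 0.0
    while True:
        t += random.expovariate(1.0 / MTBF)   # time until next failure
        if t >= horizon:
            return
        yield t, random.expovariate(1.0 / MTTR)

for when, repair in failure_schedule(480):     # one 8-hour shift
    print(f"failure at {when:6.1f} min, down for {repair:4.1f} min")
```

Other distributions (e.g. Weibull for wear-out failures) may fit measured breakdown data better; the exponential is just the simplest defensible starting point.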
When all is said and done, translating reality into a model description is far more difficult than translating the model description into a computer model. Translating reality into a model always means that you are giving an interpretation of reality. It will be necessary to specify any and all assumptions made in the translation. In fact, it is a good idea to keep a list of assumptions readily available during the entire simulation study, as the list will tend to grow.

If this step of describing the system is done right, the step of building the computer model will be greatly simplified. Remember that it is only necessary for the model to contain enough detail to capture the essence of the system for the purposes for which the model is intended; it is not necessary to have a one-to-one relationship between elements of the model and elements of the system. As Einstein said: "Keep it as simple as possible, but not simpler."

4. List possible alternative solutions

It is important to determine early in the simulation study which alternative scenarios the model will have to run, because they will influence how the model is built. By taking alternatives into account at an early stage, the model can be set up in such a way that it is easily transformed into an alternative system.

5. Collect data and gather information

In addition to gathering data and information for the input parameters to the model, it is helpful when validating a model to have real data to compare the model's performance measures against. It is recommended that you first gather data from historical records, experience, or by calculation. This rough data will provide a basis for establishing the model's input parameters and will help identify those input parameters requiring more precise data collection.

Existing sources of data are not always available, and data collection through measurements can be both expensive and time consuming. Rather than go out and start collecting data on every input parameter of the system, it will be more cost effective to use estimates until a sensitivity analysis can be performed on the model to pinpoint those parameters requiring reliable data. Estimates can be obtained from a few quick measurements or by consulting with system experts who have hands-on experience or good familiarity with the system. Even when using rough data, it is best to at least define a triangular distribution based on a minimum, maximum, and most likely value, rather than simply use an average value.

Sometimes estimates may be sufficient to meet the objective of the simulation study. For example, the simulation may simply be used to educate personnel on certain cause-and-effect relationships within the system; in that case, an estimate is all that is needed. When reliable data is needed, it is necessary to collect a statistically significant amount of data over a representative amount of time in order to define a probability distribution that accurately represents reality. The number of data points required depends on the variance, but as a rule of thumb it is necessary to have at least thirty, and more likely hundreds.
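The triangular-distribution advice above is easy to apply with Python's standard library. The time estimates below are hypothetical expert guesses; the point of the sketch is that the distribution's mean differs from the "most likely" value an expert would quote.

```python
import random

random.seed(1)

# Hypothetical expert estimate of a process time, in minutes.
low, mode, high = 4.0, 5.5, 9.0   # minimum, most likely, maximum

samples = [random.triangular(low, high, mode) for _ in range(10_000)]
mean = sum(samples) / len(samples)

# The triangular mean is (low + mode + high) / 3, not the most-likely value.
print(f"sample mean {mean:.2f} min, "
      f"theoretical mean {(low + mode + high) / 3:.2f} min")
```

Using the full spread rather than a single average also lets queues form in the model the way they do in reality.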
If the input parameter is a stochastic down time, then it may be necessary to collect data over a long period of time in order to capture a significant number of points.

6. Build the computer model

When building the model, keep the objective in mind. Build small test models first to prove out any difficult-to-model parts. Always build the model in phases, getting each phase to work properly before proceeding to the next. Never lay out the entire system and then hit the run button; run and debug each phase as you go. You may want to make several models of the same system, each with a different abstraction level. Abstract models will help define the important parts of the system and direct the data collection effort for subsequent, more detailed models.

7. Verify and validate the model

Verification is determining whether the model functions as intended: does the model coincide with the model you wanted to build? Are the products being processed for the correct amount of time, and are they going where they're supposed to go? Validation is more extensive. It involves determining whether the model is a correct representation of reality, and how much confidence can be placed in its results.

Verification

There are a number of techniques that can be used to verify a simulation model. The first, and most valuable, is to view the animation and the simulation clock simultaneously while running the model at slow speed. This should point out any gross discrepancies in flow routes and processing times. Another verification technique is to query the states and attributes of the resources and flow items in the model through the interactive command window, or by displaying dynamic charts and graphs on the screen while the model is running. Running the model in step mode and viewing the trace file dynamically can help in debugging the model.

It is a good idea to run the simulation under a variety of settings for the input parameters and check that the output is reasonable. In some cases, certain simple measures of performance may be calculated by hand and used for a direct comparison; utilization and production rates are usually easy to calculate for defined areas in the model. When debugging a problem in the model, it is recommended that the same random number stream be used for each trial run, so that observed changes are correctly attributed to the modifications made to the model and not to a change in the random number stream. Sometimes it is also helpful to run the model under simplifying assumptions for which the performance can be easily predicted or computed.

Validation

Model validation establishes credibility in the model.
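The same-random-stream debugging technique can be sketched as follows. The toy "model" here is hypothetical; the point is that a fixed seed gives a dedicated, reproducible stream, so any output change is attributable to the parameter edit alone.

```python
import random

def run_model(process_time_mean, seed):
    """Toy stand-in for a simulation run: average of 100 sampled times."""
    rng = random.Random(seed)          # dedicated, reproducible stream
    total = 0.0
    for _ in range(100):               # 100 items
        total += rng.expovariate(1.0 / process_time_mean)
    return total / 100                 # average observed process time

# Same seed for both trials: the difference is due to the changed
# parameter alone, not a different random number stream.
base     = run_model(5.0, seed=7)
modified = run_model(4.0, seed=7)
print(f"base {base:.2f} min, modified {modified:.2f} min")
```

Most commercial simulation tools expose the same idea as "common random numbers" or a per-experiment stream/seed setting.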
However, no validation technique will give 100% certainty in the results of a model. We can never prove that the behavior of the model is an exact description of reality; if we could, there might be no need for a simulation model in the first place. At most, we can strive to make sure that the behavior of the model is not contradicted by the facts. With validation, we try to determine the extent to which the model is good. A model is good if it meets our objective by providing relatively accurate information. The model should only be as valid as necessary, not as valid as possible: there is always a trade-off between the accuracy of the results and the cost of obtaining them.

Here are a number of ideas on judging the validity of a model:

a) Do the model performance measures match the actual system performance measures?
b) If there is no actual system to compare to, then make comparisons to relevant results from similar simulation models of actual systems.
c) Use the experience or intuition of system experts to hypothesize how specific components of a complex system should perform.
d) Perform a structured walk-through of the model before an audience of all key people to ensure that the model's inputs and assumptions are correct, and that the model performance measures are realistic. The knowledge of the complete team contributes to the validity of the model.
e) Does the behavior of the model correspond with theory? Determine the theoretical minimum and maximum of the results and check whether the results fall within this interval.
f) Vary input parameters whose direction of effect on a particular performance measure is known, and verify that the model responds accordingly.
g) Is the model capable of accurately predicting results? This technique is used for the continual validation of a model that is used on an ongoing basis.
h) Have a fellow simulation modeler review the model. Better yet, have another modeler build a model of the same system and compare results.

8. Run alternative experiments

Multiple simulation runs (or observations) are always required when stochastics are involved. Remember: random in, random out. Confidence intervals should be calculated, if possible, for each of the performance measures defined in step 2. The alternative scenarios can be set up individually and simulated manually using the experiment module of Flexsim, or automatic runs can be executed using the optimization module. The optimization module of Flexsim uses the OptQuest software developed by OptTek Systems, Inc.
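The replication idea can be sketched generically as below. The throughput function is a hypothetical stand-in for a real simulation run; the structure (scenarios x replications, each replication on its own seed) is what matters.

```python
import random
import statistics

def simulate_throughput(machines, seed):
    """Hypothetical stand-in for one simulation run: each machine
    produces about 12 parts/hour with some random variation."""
    rng = random.Random(seed)
    return sum(rng.gauss(12.0, 1.5) for _ in range(machines))

scenarios = {"2 machines": 2, "3 machines": 3}
replications = 30   # one observation per replication

results = {
    name: [simulate_throughput(m, seed=rep) for rep in range(replications)]
    for name, m in scenarios.items()
}

for name, obs in results.items():
    print(f"{name}: mean {statistics.mean(obs):.1f}, "
          f"stdev {statistics.stdev(obs):.1f} parts/hour")
```

A single run per scenario would compare two random draws; thirty replications per scenario let the comparison rest on estimated means instead.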
To conduct an optimization, you will need to define an objective variable to be maximized or minimized, as many decision variables as you would like to experiment with, any requirements that need to be met, and any linear constraints you want satisfied. Then specify the desired confidence interval for the objective variable and let OptQuest take charge of running the model the correct number of replications to meet that confidence interval for each experiment, ultimately finding the optimum set of decision variables for maximizing or minimizing your objective variable. OptQuest uses meta-heuristics (a family of optimization approaches that includes genetic algorithms, simulated annealing, tabu search, scatter search, and other hybrids) to work toward an optimum solution.

When choosing the run length of the simulation, it is important to consider warm-up periods, possibly long times between resource failures, daily or seasonal variances in process times or arrivals, or any other system characteristics that would require a long run length in order to capture their effect.

9. Analyze outputs

Reports, charts, graphs, and confidence interval plots will all be part of output analysis. A confidence interval indicates the range in which a performance measure lies, using an upper and a lower limit. The degree to which the upper and lower limits are separated is called the accuracy, and the reliability of the range is indicated with a percentage. Statistical techniques are used to analyze the output data from each of the alternative scenario runs. When analyzing results and drawing conclusions, be sure to interpret the results in a way that relates to the objective. It is often helpful to generate a matrix of results and alternatives.
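As a sketch of the accuracy/reliability terminology above: the half-width of the interval is the accuracy, and the chosen percentage (95% here) is the reliability. The observations are hypothetical replication outputs, and a normal approximation is assumed for the critical value (reasonable for roughly thirty or more replications; a t-distribution is more exact for small samples).

```python
import math
import statistics

# Hypothetical throughput observations, one per replication (parts/hour).
observations = [23.1, 24.8, 22.5, 25.2, 23.9, 24.4, 22.9, 24.1,
                23.6, 25.0, 23.3, 24.6, 22.7, 24.9, 23.8]

mean = statistics.mean(observations)
half_width = (statistics.NormalDist().inv_cdf(0.975)   # ~1.96 for 95%
              * statistics.stdev(observations)
              / math.sqrt(len(observations)))

# half_width is the "accuracy"; 95% is the "reliability" of the range.
print(f"95% CI: {mean:.2f} +/- {half_width:.2f} parts/hour")
```

When two scenarios' intervals do not overlap, the difference in that performance measure is unlikely to be an artifact of randomness.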
