CHAPTER 8
MODEL VERIFICATION AND VALIDATION
8.1 Introduction
Building a simulation model is much like developing an architectural plan for a
house. A good architect will review the plan and specifications with the client or
owner of the house to ensure that the design meets the client’s expectations. The
architect will also carefully check all dimensions and other specifications shown
on the plan for accuracy. Once the architect is reasonably satisfied that the right
information has been accurately represented, a contractor can be given the plan to
begin building the house. In a similar way, the simulation analyst should examine the validity and correctness of the model before using it to make implementation decisions.
In this chapter we cover the importance and challenges associated with model
verification and validation. We also present techniques for verifying and validating models. Balci (1997) provides a taxonomy of more than 77 techniques for
model verification and validation. In this chapter we give only a few of the more
common and practical methods used. The greatest problem with verification and
validation is not one of failing to use the right techniques but failing to use any
technique. Questions addressed in this chapter include the following:
• What are model verification and validation?
• What are obstacles to model verification and validation?
• What techniques are used to verify and validate a model?
• How are model verification and validation maintained?
Two case studies are presented at the end of the chapter showing how verification and validation techniques have been used in actual simulation projects.
FIGURE 8.1 Two-step translation process to convert a real-world system to a simulation model: the system is first translated into a concept (checked through validation), and the concept is then translated into a model (checked through verification).
Attempts to sort through the tangled mess and figure out what the model creator had in mind become almost futile. It is especially discouraging when attempting to use a poorly constructed model for future experimentation. Trying to figure out what changes need to be made to model new scenarios becomes difficult, if not impossible.
The first step in creating models that ease the difficulty of verification and validation is to reduce the complexity of the model. Frequently the most complex models are built by novices who have not yet learned how to abstract system information; they code far too much detail into the model. Once a model has been simplified as much as possible, it should be coded so that it is easily readable and understandable. Using object-oriented techniques such as encapsulation can help organize model data. The right simulation software can also help keep model data organized and readable by providing table entries and intelligent, parameterized constructs rather than requiring lots of low-level programming. Finally, model data and logic code should be thoroughly and clearly documented. This means that every subroutine used should have an explanation of what it does, where it is invoked, what the parameters represent, and how to change the subroutine for modifications that may be anticipated.
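As a sketch of this documentation standard, consider the following hypothetical subroutine; the name, parameters, and constant are all invented for illustration, but its header answers each of the questions above:

def load_oven(batch_size: int, cure_time_min: float) -> float:
    """Compute total oven occupancy time, in minutes, for one batch.

    What it does:  returns load/unload overhead plus cure time.
    Where invoked: from the oven station's processing logic, once per batch.
    Parameters:    batch_size    - number of panels loaded together
                   cure_time_min - cure time drawn from the process data
    To modify:     the anticipated change is the per-panel handling time;
                   adjust LOAD_MIN_PER_PANEL rather than editing the formula.
    """
    LOAD_MIN_PER_PANEL = 0.5  # assumed handling time per panel, minutes
    return batch_size * LOAD_MIN_PER_PANEL + cure_time_min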
In the top-down approach, the verification testing begins with the main module and moves down gradually to lower modules. At the top level, you are interested mainly in whether the outputs of modules are as expected given the inputs. If discrepancies arise, lower-level code analysis is conducted.
In both approaches sample test data are used to test the program code. The bottom-up approach typically requires a smaller set of data to begin with. After exercising the model using test data, the model is stress tested using extreme input values. With careful selection, the results of the simulation under extreme conditions can be predicted fairly well and compared with the test results.
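These extreme-condition predictions can be written as self-checking assertions. The sketch below is a minimal example; run_model is a stand-in defined analytically here (its name and signature are assumptions), whereas in practice it would execute the simulation and return the observed throughput:

def run_model(arrival_rate: float, num_servers: int,
              rate_per_server: float = 2.5) -> float:
    """Toy stand-in for a simulation run; returns mean throughput."""
    return min(arrival_rate, num_servers * rate_per_server)

# Outcomes we can predict without running a full experiment:
assert run_model(0.0, 3) == 0.0                # no arrivals -> no output
assert run_model(8.0, 4) >= run_model(8.0, 1)  # adding servers cannot hurt
assert run_model(1e6, 4) <= 4 * 2.5            # overload capped at capacity
print("extreme-condition checks passed")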
FIGURE 8.2 Fragment of a trace listing.
FIGURE 8.3 ProModel debugger window.
A debugger allows the simulation to be paused so the modeler can examine logic and state values as they dynamically change. Like trace messaging, debugging can be turned on either interactively or programmatically. An example of a debugger window is shown in Figure 8.3.
Experienced modelers make extensive use of trace and debugging capabilities. Animation and output reports are good for detecting problems in a simulation, but trace and debug messages help uncover why problems occur.
Using trace and debugging features, event occurrences and state variables can be examined and compared with hand calculations to see if the program is operating as it should. For example, a trace list might show when a particular operation began and ended. This can be compared against the input operation time to see if they match.
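A few lines of Python suffice to scan such a trace and flag operations whose durations drift from the operation time entered in the model. The trace format, station name, and times below are invented for illustration and are not ProModel's actual trace syntax:

# Each trace line is assumed to read "<time> <event> <station>".
trace_lines = [
    "12.50 BEGIN_OP Lathe1",
    "17.50 END_OP   Lathe1",
    "20.00 BEGIN_OP Lathe1",
    "25.10 END_OP   Lathe1",
]
expected_op_time = 5.0  # operation time entered in the model
tolerance = 0.2         # allowance for rounding in the trace

starts = {}
for line in trace_lines:
    time_str, event, station = line.split()
    t = float(time_str)
    if event == "BEGIN_OP":
        starts[station] = t
    else:  # END_OP: compare observed duration with the model input
        duration = t - starts.pop(station)
        ok = abs(duration - expected_op_time) <= tolerance
        print(f"{station}: {duration:.2f} min", "" if ok else "<-- check")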
One type of error that a trace or debugger is useful in diagnosing is gridlock. It arises from a circular dependency: one action depends on a second action, which in turn depends on the first. You may have experienced this situation at a busy traffic intersection or when trying to leave a crowded parking lot after a football game. It leaves you with a sense of utter helplessness and frustration. An example of gridlock in simulation sometimes occurs when a resource attempts to move a part from one station to another, but the second station is unavailable because an entity there is waiting for the same resource. The usual symptom of this situation is that the model appears to freeze up and no more activity takes place, while the simulation clock races rapidly forward. A trace of events can help detect why the entity at the second station is stuck waiting for the resource.
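Because gridlock is a circular wait, it can also be detected mechanically whenever the model can report which element each blocked item is waiting on. A minimal sketch, assuming such a wait-for log is available (the element names are hypothetical):

def find_cycle(waits_on: dict) -> list | None:
    """Return one circular-wait chain if the wait-for graph has a cycle."""
    for start in waits_on:
        path, node = [], start
        while node in waits_on and node not in path:
            path.append(node)
            node = waits_on[node]
        if node in path:  # walked back onto the path: a cycle
            return path[path.index(node):] + [node]
    return None

waits_on = {
    "Robot":    "Station2",  # robot waits for space at station 2
    "Station2": "EntityB",   # station 2 is occupied by entity B
    "EntityB":  "Robot",     # entity B waits for the same robot
}
cycle = find_cycle(waits_on)
if cycle:
    print("Gridlock detected:", " -> ".join(cycle))
    # Gridlock detected: Robot -> Station2 -> EntityB -> Robot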
• Performing sensitivity analysis. The modeler should have at least an intuitive idea of how the model will react to a given change. It should be obvious, for example, that doubling the number of resources for a bottleneck operation should increase, though not necessarily double, throughput.
• Running traces. An entity or sequence of events can be traced through the
model processing logic to see if it follows the behavior that would occur
in the actual system.
• Conducting Turing tests. People who are knowledgeable about the operations of a system are asked if they can discriminate between system and model outputs. If they cannot tell which outputs came from the model and which came from the actual system, this is another piece of evidence in favor of the model's validity (see the blinding sketch after this list).
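A Turing test is easier to administer if the outputs are blinded first. The sketch below, with invented sample data, shuffles and relabels output series so the expert reviewing them cannot tell which came from the model:

import random

system_outputs = [[112, 108, 115], [110, 109, 114]]  # illustrative data
model_outputs  = [[113, 109, 116], [108, 112, 111]]

labeled = [("system", s) for s in system_outputs] + \
          [("model", m) for m in model_outputs]
random.shuffle(labeled)

answer_key = {}
for i, (source, series) in enumerate(labeled, start=1):
    answer_key[f"sample {i}"] = source
    print(f"sample {i}: {series}")  # shown to the expert, unlabeled

# After the expert classifies each sample, compare against answer_key;
# guesses no better than chance are evidence of the model's validity.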
A common method of validating a model of an existing system is to compare the performance of the model with that of the actual system. This approach requires that an as-is simulation be built that corresponds to the current system. This helps "calibrate" the model so that it can be used for simulating variations of the same model. After running the as-is simulation, the performance data are compared to those of the real-world system. If sufficient data are available on a performance measure of the real system, a statistical test called Student's t test can be applied to determine whether the sampled data sets from the model and the actual system come from the same distribution. An F test can be performed to test the equality of variances of the real system and the simulation model (both tests are sketched after the following list). Some of the problems associated with statistical comparisons include these:
• Simulation model performance is based on very long periods of time. On
the other hand, real system performances are often based on much shorter
periods and therefore may not be representative of the long-term
statistical average.
• The initial conditions of the real system are usually unknown and are
therefore difficult to replicate in the model.
• The performance of the real system is affected by many factors that may
be excluded from the simulation model, such as abnormal downtimes or
defective materials.
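With those caveats in mind, the two tests themselves take only a few lines with SciPy. The throughput samples below are invented stand-ins for measured and simulated performance data:

import numpy as np
from scipy import stats

real  = np.array([112, 108, 115, 110, 109, 114, 111, 107])
model = np.array([113, 109, 116, 112, 108, 115, 110, 111])

# Student's t test for equal means (Welch's form, which does not
# assume equal variances).
t_stat, t_p = stats.ttest_ind(real, model, equal_var=False)

# F test for equal variances: ratio of sample variances referred to
# the F distribution, two-sided.
f_stat = np.var(real, ddof=1) / np.var(model, ddof=1)
df1, df2 = len(real) - 1, len(model) - 1
f_p = 2 * min(stats.f.cdf(f_stat, df1, df2), stats.f.sf(f_stat, df1, df2))

print(f"t = {t_stat:.3f}, p = {t_p:.3f}")  # large p: means consistent
print(f"F = {f_stat:.3f}, p = {f_p:.3f}")  # large p: variances consistent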
The problem of validation becomes more challenging when the real system doesn't exist yet. In this case, the simulation analyst works with one or more experts who have a good understanding of how the real system should operate. They team up to see whether the simulation model behaves reasonably in a variety of situations. Animation is frequently used to watch the behavior of the simulation and spot any nonsensical or unrealistic behavior.
The purpose of validation is to mitigate the risk associated with making decisions based on the model. As in any testing effort, there is an optimum amount of validation beyond which the return may be less valuable than the investment (see Figure 8.4). The optimum effort to expend on validation should be based on minimizing the total cost of the validation effort plus the risk cost associated with making decisions based on an invalid model.
FIGURE 8.4 Optimum level of validation effort (risk cost versus validation effort).
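The tradeoff sketched in Figure 8.4 can be made concrete with assumed cost curves; the functional forms and constants below are illustrative only, not derived from any real project:

import numpy as np

effort = np.linspace(0.0, 10.0, 1001)      # validation effort, arbitrary units
validation_cost = 2.0 * effort              # assumed linear cost of effort
risk_cost = 50.0 * np.exp(-0.6 * effort)    # assumed exponential risk decay
total = validation_cost + risk_cost

best = effort[np.argmin(total)]
print(f"optimum effort ~ {best:.2f}, minimum total cost ~ {total.min():.2f}")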
The hospital case study illustrates an informal and intuitive approach to validation, while the HP case study relies on more formal validation techniques.
The animation, which showed the actual movement of patients and use of resources, was a valuable tool during the validation process and increased the credibility of the model.
The use of a team approach throughout the data-gathering and model development stages proved invaluable. It both gave the modeler immediate feedback regarding incorrect assumptions and boosted the confidence of the team members in the simulation results. The frequent reviews also allowed the group to bounce ideas off one another. As a result of conducting a sound and convincing simulation study, the hospital's administration was persuaded by the simulation results and implemented the recommendations in the new construction.
Model output was compared with the output of the real process. The initial model predicted 25 shifts to complete a specific sequence of panels, while the process actually took 35 shifts. This led to further process investigations of interruptions or other processing delays that weren't being accounted for in the model. It was discovered that feeder replacement times were underestimated in the model. A discrepancy between the model and the actual system was also discovered in the utilization of the pick-and-place machines. After tracking down the causes of these discrepancies and making appropriate adjustments to the model, the simulation results were much closer to those of the real process. The challenge at this stage was not to yield to the temptation of making arbitrary changes to the model just to get the desired results; the model would then lose its integrity and become nothing more than a sham for the real system.
The final step was to conduct sensitivity analysis to determine how model
performance was affected in response to changes in model assumptions. By
changing the input in such a way that the impact was somewhat predictable, the
change in simulation results could be compared with intuitive guesses. Any bizarre result, such as a decrease in work in process when the arrival rate was increased, would raise an immediate flag. Input parameters were systematically changed
based on a knowledge of both the process behavior and the model operation.
Knowing just how to stress the model in the right places to test its robustness was
an art in itself. After everyone was satisfied that the model accurately reflected the
actual operation and that it seemed to respond as would be expected to specific
changes in input, the model was ready for experimental use.
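A directional check of this kind is easy to automate. The sketch below substitutes a simple M/M/1 queueing formula for the real model (any model function could stand in for mm1_wip, which is invented here) and asserts that work in process never falls as the arrival rate rises:

import math

def mm1_wip(arrival_rate: float, service_rate: float) -> float:
    """Average work in process for an M/M/1 queue: L = rho / (1 - rho)."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        return math.inf  # unstable queue: WIP grows without bound
    return rho / (1.0 - rho)

service_rate = 10.0
wip = [mm1_wip(lam, service_rate) for lam in (2.0, 4.0, 6.0, 8.0)]
assert all(a <= b for a, b in zip(wip, wip[1:])), "WIP fell as arrivals rose"
print([round(w, 2) for w in wip])  # [0.25, 0.67, 1.5, 4.0]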
8.5 Summary
For a simulation model to be of greatest value, it must be an accurate representation of the system being modeled. Verification and validation are two activities that should be performed with simulation models to establish their credibility.
Model verification is basically the process of debugging the model to ensure that it accurately represents the conceptual model and that the simulation runs correctly. Verification involves mainly the modeler without the need for much input from the customer. Verification is not an exact science, although several proven techniques can be used.
Validating a model begins at the data-gathering stage and may not end until
the system is finally implemented and the actual system can be compared to the
model. Validation involves the customer and other stakeholders who are in a
position to provide informed feedback on whether the model accurately reflects
the real system. Absolute validation is philosophically impossible, although a
high degree of face or functional validity can be established.
While formal methods may not always be used to validate a model, at least some time should be devoted to a careful review. The secret to validation is not so much the technique as it is the attitude. One should be as skeptical as can be tolerated, challenging every input and questioning every output. In the end, the decision maker should be confident in the simulation results, not because he or
she trusts the software or the modeler, but because he or she trusts the input data
and knows how the model was built.
References
Balci, Osman. "Verification, Validation and Accreditation of Simulation Models." In Proceedings of the 1997 Winter Simulation Conference, ed. S. Andradottir, K. J. Healy, D. H. Withers, and B. L. Nelson, 1997, pp. 135–41.
Banks, Jerry. “Simulation Evolution.” IIE Solutions, November 1998, pp. 26–29.
Banks, Jerry, John Carson, Barry Nelson, and David Nicol. Discrete-Event System Simulation. 3rd ed. Upper Saddle River, NJ: Prentice-Hall, 2000.
Baxter, Lori K., and Eric Johnson. "Don't Implement before You Validate." Industrial Engineering, February 1993, pp. 60–62.
Hoover, Stewart, and Ronald Perry. Simulation: A Problem Solving Approach. Reading,
MA: Addison-Wesley, 1990.
Neelamkavil, F. Computer Simulation and Modeling. New York: John Wiley & Sons, 1987.
O’Conner, Kathleen. “The Use of a Computer Simulation Model to Plan an Obstetrical
Renovation Project.” The Fifth Annual PROMODEL Users Conference, Park City,
UT, 1994.
Sargent, Robert G. “Verifying and Validating Simulation Models.” In Proceedings of the
1998 Winter Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson,
and M. S. Manivannan, 1998, pp. 121–30.