KRIGING INTERPOLATION IN SIMULATION: A SURVEY
Wim C.M. van Beers
Jack P.C. Kleijnen

Department of Information Systems and Management / Center for Economic Research (CentER)
Tilburg University
Postbox 90153, 5000 LE Tilburg
THE NETHERLANDS
ABSTRACT

Many simulation experiments require much computer time, so they necessitate interpolation for sensitivity analysis and optimization. The interpolating functions are 'metamodels' (or 'response surfaces') of the underlying simulation models. Classic methods combine low-order polynomial regression analysis with fractional factorial designs. Modern Kriging provides 'exact' interpolation, i.e., predicted output values at inputs already observed equal the simulated output values. Such interpolation is attractive in deterministic simulation, and is often applied in Computer Aided Engineering. In discrete-event simulation, however, Kriging has just started. Methodologically, a Kriging metamodel covers the whole experimental area; i.e., it is global (not local). Kriging often gives better global predictions than regression analysis. Technically, Kriging gives more weight to 'neighboring' observations. To estimate the Kriging metamodel, space filling designs are used; for example, Latin Hypercube Sampling (LHS). This paper also presents novel, customized (application driven) sequential designs based on cross-validation and bootstrapping.

1 INTRODUCTION

In practice, experiments with simulation models (also called computer codes) often require much computer time; for example, we have heard of a simulation that requires 50 hours per run. Consequently, sensitivity analysis and optimization require the analysts to interpolate the observed input/output (I/O) data. The interpolating function is called a metamodel of the underlying simulation model (it is also called a response surface, emulator, compact model, auxiliary model, etc.). Interpolation treats the simulation model as a black box (unlike Perturbation Analysis and Score Function methods). Classic interpolation methods use first-order or second-order polynomials; see Kleijnen (2004). In this paper, however, we survey an alternative type of metamodel, namely Kriging (named after Krige, the South African mining engineer who developed this method in the 1950s; see Cressie (1993) for historical details). Kriging has already realized a track record in geostatistics–see the classic textbook, Cressie (1993)–and in deterministic simulation–see the classic paper, Sacks et al. (1989), and the recent textbook, Santner, Williams, and Notz (2003).

Kriging provides exact interpolation; i.e., the predicted output values at 'old' input combinations already observed are equal to the simulated output values at those inputs ('inputs' are also called 'factors'; 'input combinations' are also called 'scenarios'). Obviously, such interpolation is appealing in deterministic simulation. Kriging and deterministic simulation are often applied in Computer Aided Engineering (CAE) for the (optimal) design of airplanes, automobiles, computer chips, computer monitors, etc.; see again Sacks et al. (1989) and–for updates–Meckesheimer et al. (2002) and Simpson et al. (2001).

However, Kriging for discrete-event simulation has just started. In 2002, a search of the 'International Abstracts of Operations Research' gave only two hits. A Google search for the term Kriging in the WSC proceedings gave only seven hits for 1998 through 2003; for example, Chick (1997) and Barton (1998). Earlier, Barton (1994) also proposed the application of Kriging to random simulations (such as discrete-event queueing simulations). Régnière and Sharov (1999) discuss spatial and temporal output data of a random simulation model for ecological processes. In this paper, we survey our recent research on Kriging in random simulation; details are given by Kleijnen and Van Beers (2004a) and Van Beers and Kleijnen (2003).

Methodologically speaking, a Kriging interpolator covers the whole experimental area; i.e., it is a global (not local) metamodel. Our research indicates that Kriging may give better global predictions than low-order polynomial regression metamodels.

Note: Regression may be attractive when looking for an explanation–not a prediction–of the simulation's I/O behavior; for example, which inputs are most important; does the simulation show a behavior that the experts find acceptable (also see Kleijnen 1998)? Regression models such as polynomials are useful in the local search for optimization and in
Van Beers and Kleijnen
screening for identification of the most important factors among hundreds of factors; again see Kleijnen (2004).

To estimate the parameters (or coefficients) of these polynomials, the analysts must run their simulation model for a set of (say) n combinations of the k input values; this set is called a design (or an n × k design matrix or plan). The analysts usually apply classic designs such as fractional factorials (for example, 2^(k−p) designs); see Myers and Montgomery (2002). Kriging, however, estimates its parameters through space filling designs, which we shall detail in Section 3. The simplest and most popular designs use Latin Hypercube Sampling (LHS), which is already part of spreadsheet add-on software, such as Crystal Ball and @Risk. We shall summarize our own novel customized sequential design approach (tailored to the actual simulation; i.e., application driven)–first presented for deterministic simulation in Kleijnen and Van Beers (2004b), and for random simulation in Van Beers and Kleijnen (2004).

We organize the remainder of our paper as follows. Section 2 presents Kriging basics (including its stationary covariance process), and compares Kriging with regression. Section 3 summarizes standard designs for Kriging metamodels, emphasizing LHS designs. Section 4 presents novel designs that are sequential and customized, using either cross-validation or bootstrapping. Section 5 gives conclusions and further research topics.

2 BASICS OF ORDINARY KRIGING

There are several types of Kriging, but we limit our discussion to so-called Ordinary Kriging. Its predictor for the 'new', unobserved (non-simulated) input x_{n+1}–denoted by Ŷ(x_{n+1})–is a weighted linear combination of all the n 'old', already observed outputs Y(x_i):

    Ŷ(x_{n+1}) = Σ_{i=1}^{n} λ_i Y(x_i) = λ′Y                              (1)

with Σ_{i=1}^{n} λ_i = 1, λ = (λ_1, …, λ_n)′, and Y = (Y(x_1), …, Y(x_n))′; capital letters denote random variables. This Kriging assumes a single output per input combination; in case of multiple outputs, the predictor may be computed per output. For multivariate Kriging we refer to Wackernagel (2003). (Each input vector has k components; see Section 1.)

To quantify the Kriging weights λ in (1), Kriging derives the best linear unbiased estimator (BLUE). The unbiasedness criterion implies that the predictor for an input that has already been observed equals the observed output (this property makes Kriging an exact interpolator). To account for the output (co)variances, Kriging assumes that the closer the input data are, the more positively correlated their outputs are.

This assumption is modeled through a covariance process that is second-order stationary; i.e., the means and variances are constants (say) μ_Y and σ_Y², and the covariances of the outputs depend only on the 'distances' (say) h between the inputs. In fact, these covariances decrease as these distances increase.

Kriging may use different types of covariance functions. For example, if there is a single input factor (k = 1), the exponential covariance function with parameter θ is

    cov[Y(x_i), Y(x_j)] = σ_Y² exp(−θ |x_i − x_j|)                         (2)

so the second factor in (2) represents the correlation function. Another example is the Gaussian function, which replaces the absolute value |x_i − x_j| in (2) by the square (x_i − x_j)² (so it becomes smoother). A third example is the linear covariance function, which is used in Kleijnen and Van Beers (2004a, b) and Van Beers and Kleijnen (2003). In two examples, Kleijnen and Van Beers (2004b) find that it does not matter much whether linear or exponential covariance functions are used.

On one hand, the covariance functions tend to zero as the distance increases; again see (2) for an example. On the other hand, at zero distance there is still variation in the output, namely σ_Y². In geostatistics this is called the nugget effect; for example, when going back to the 'same' spot, a completely different output (namely, a gold nugget) may be observed. In deterministic simulation, this variance is harder to interpret. Den Hertog, Kleijnen, and Siem (2004) experiment with deterministic simulations, and demonstrate how a given covariance function with a specific θ parameter may give particular output patterns; see Figure 1. Obviously, this pattern is not white noise, which implies uncorrelated outputs (white noise is assumed by classic regression analysis). This figure demonstrates what it means to model the I/O function of a deterministic simulation model by means of a random metamodel, namely, the random process specified by (1) and (2). More details follow in Section 3.2.

In practice there are multiple input factors (k > 1). The Kriging literature then assumes product form correlation functions; for example, (2) gives

    cov[Y(x_i), Y(x_j)] = σ_Y² Π_{g=1}^{k} exp(−θ_g |x_{i;g} − x_{j;g}|)    (3)

where the parameters θ_g quantify the relative importance of the various inputs, and x_{i;g} denotes the value of input g in input combination i.
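To make equations (1) and (2) concrete, the Ordinary Kriging computation can be sketched in a few lines of Python. This sketch is illustrative only (it is not the software used in the experiments surveyed here): it assumes the covariance parameter θ is known, whereas in practice it must be estimated, and it keeps the linear algebra dependency-free for small n.

```python
from math import exp

def exp_cov(x1, x2, theta, sigma2=1.0):
    """Exponential covariance function, equation (2)."""
    return sigma2 * exp(-theta * abs(x1 - x2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (kept dependency-free)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kriging_weights(x_old, x_new, theta=1.0):
    """Ordinary Kriging weights lambda in equation (1): minimize the
    prediction variance subject to sum(lambda) = 1, which yields a linear
    system bordered by one Lagrange-multiplier row and column."""
    n = len(x_old)
    A = [[exp_cov(x_old[i], x_old[j], theta) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [exp_cov(xi, x_new, theta) for xi in x_old] + [1.0]
    return solve(A, b)[:n]  # drop the Lagrange multiplier

def predict(x_old, y_old, x_new, theta=1.0):
    """Equation (1): weighted linear combination of the old outputs."""
    lam = kriging_weights(x_old, x_new, theta)
    return sum(l * y for l, y in zip(lam, y_old))
```

At an 'old' input the solved weights reduce to one for that input and zero for the others, so the predictor reproduces the observed output: the exact-interpolation property discussed above. (Note that σ_Y² cancels out of the weights, so only θ matters in this sketch.)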
Using the covariances between the 'old' outputs and the covariances between the old and the new outputs, the optimal weights in (1) can be derived. We refer to the literature–and the corresponding software–for the rather complicated formula. The result is twofold:

(i) observations closer to the input to be predicted get more weight (if the new input equals an old input, then the corresponding weight becomes one and all the other n − 1 weights become zero);
(ii) the optimal weights vary with the specific input combination x_{n+1} that is to be predicted (i.e., a different 'new' input has different neighbors with 'old' outputs that get more weight), whereas regression metamodels use fixed estimated parameters (say) β̂.

Figure 2 illustrates Kriging versus second-order polynomial regression in a simulation experiment. More numerical illustrations–with constant and heterogeneous variances, respectively–are given by Van Beers and Kleijnen (2003) and Kleijnen and Van Beers (2004a). In all these examples, Kriging gives better predictions than regression metamodels.

The literature (e.g., Cressie 1993, p. 122) and software also give–rather complicated–formulas for the predictor variance, var(Ŷ(x_{n+1})). However, the formulas for the optimal weights and the corresponding predictor variance are wrong! They neglect the fact that the covariances must be estimated; i.e., given a function type such as (3), the parameters θ_g must be estimated. (Most publications assume

3 STANDARD DESIGNS FOR KRIGING

In the preceding section, we focused on the Kriging analysis. Now we focus on the design of the experiment with a simulation model that is going to be analyzed by Kriging. (When the analysts are going to analyze their I/O data by means of low-order polynomials, then they use designs such as fractional factorial designs.) 'Design and analysis' is a 'chicken and egg' problem.

In random simulations, we must confront the issue of replication. Even in expensive simulations–which require much computer time per replicate (or simulation run)–it is nonsense to obtain only a few replicates if the signal/noise ratio is then too small (the analysts had better toss a coin rather than develop and run a simulation model). Kleijnen and Van Beers (2004a, Figure 1) demonstrate that–in case of too few replicates–the estimated correlation function is very noisy, and hence the Kriging weights and predictions are very inaccurate.

After simulating a reasonable number of replicates (also see Law and Kelton 2000), Kriging is applied to the average output per input combination. Kleijnen and Van Beers (2004a) demonstrate that as the number of replicates increases, the Kriging predictor's accuracy also increases. (Regression models, however, did not give better predictions; an explanation might be that regression models use the average output per input combination, but–whatever the number of replicates is–their average has the same expected value.) Their conclusion is that Ordinary Kriging works well in case of random simulation–provided replicates are obtained such that the signal/noise ratio is acceptable; it is not necessary to take so many replicates that the sample averages get a constant variance.
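The replication arithmetic just described can be sketched as follows. The averaging per input combination is exactly what the text prescribes; the signal/noise measure below, however, is our own illustrative choice (spread of the averages across input combinations, divided by the average standard error of one average), since the text does not prescribe a specific formula.

```python
from statistics import mean, stdev

def average_replicates(outputs):
    """Average the p replicates per input combination; for random
    simulation, Kriging is applied to these averages.
    outputs: dict mapping input combination -> list of replicate outputs."""
    return {x: mean(reps) for x, reps in outputs.items()}

def signal_to_noise(outputs):
    """One possible signal/noise measure (an assumption, not a formula from
    the text): spread of the averages across input combinations, divided by
    the average standard error of an individual average."""
    avgs = average_replicates(outputs)
    signal = stdev(avgs.values())
    noise = mean(stdev(reps) / len(reps) ** 0.5 for reps in outputs.values())
    return signal / noise
```

With this measure, a value well above one suggests the averages carry a usable signal; a value near or below one suggests more replicates are needed before Kriging the averages.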
The simplest and most popular designs use LHS; see Section 1 and Kleijnen (2004). LHS was invented by McKay, Beckman, and Conover (1979) for deterministic simulation models. Those authors did not analyze the I/O data by Kriging (but they did assume I/O functions more complicated than the polynomial models in classic DOE). LHS offers flexible design sizes n (number of input combinations actually simulated) for any k (number of simulation inputs). LHS proceeds as follows; also see the example for k = 2 factors in Figure 3.

[Figure 3: LHS example for k = 2 factors; legend: (i) denotes scenario i (i = 1, 2, 3, 4)]

4 NOVEL DESIGNS FOR KRIGING

The classic Kriging variance formula for the prediction error neglects the random character of the 'optimal' weights (see Section 2). The formula implies zero variances at all the n old input combinations–which is correct, as Kriging implies exact interpolation. (Also see Figure 7, which we discuss below.) To correct this formula, we propose three approaches to quantify the uncertainty of the Kriging predictor:

(i) cross-validation for deterministic simulation;
(ii) parametric bootstrapping for deterministic simulation;
(iii) distribution-free bootstrapping for random simulation.
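The LHS construction of Section 3 is easy to sketch. The version below follows the stratified-sampling idea of McKay, Beckman, and Conover (1979) on the unit cube [0, 1)^k: for each input, partition [0, 1) into n equal strata, draw one value per stratum, and shuffle the strata independently per input. Scaling the columns to the actual input ranges is omitted, and the seed is an arbitrary choice for reproducibility.

```python
import random

def latin_hypercube(n, k, seed=123):
    """One n x k LHS design on [0, 1)^k: row i is input combination
    (scenario) i; each input has exactly one value in each of its n strata."""
    rng = random.Random(seed)
    columns = []
    for _ in range(k):
        # one stratified sample per input, then shuffle the strata
        column = [(stratum + rng.random()) / n for stratum in range(n)]
        rng.shuffle(column)
        columns.append(column)
    # transpose: n rows (combinations) by k columns (inputs)
    return [tuple(col[i] for col in columns) for i in range(n)]
```

The defining LHS property is easy to check: projecting the n design points onto any single input axis hits every one of the n strata exactly once.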
also Mertens 2001). In this way, we estimate the uncertainty of the output at input combinations not yet observed. To quantify this uncertainty, we use the jackknifed variance; see (6) below (for jackknifing in general, again see Meckesheimer et al. and Mertens).

It turns out that customized designs concentrate on scenarios in sub-areas that have more interesting I/O behavior. In Figure 4 (further discussed below), we avoid spending much time on the relatively flat part of the I/O function with two local hills; in Figure 5, we spend most of our simulation time on the 'explosive' part of the hyperbolic I/O function. (Scenarios 'close' to the extreme scenarios might be better than the extremes themselves.)

[Figure: --- model, O I/O data, × candidate locations, z predictions ŷ^(−i)]
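The jackknife calculations of equations (5) and (6) below can be sketched as follows. The `predict` argument stands in for an exact interpolator such as the Kriging predictor of Section 2; for simplicity this sketch deletes every old observation in turn, while `loo_indices` can restrict the deleted set (the choice of which observations to cross-validate is left open here).

```python
def jackknife_variance(x_old, y_old, candidate, predict, loo_indices=None):
    """Equations (5)-(6): jackknife variance of the prediction at one
    candidate input, from leave-one-out cross-validation.
    predict(x_old, y_old, x_new) is any exact interpolator."""
    if loo_indices is None:
        loo_indices = range(len(x_old))
    loo_indices = list(loo_indices)
    nc = len(loo_indices)
    y_full = predict(x_old, y_old, candidate)          # y-hat^(-0)
    pseudo = []
    for i in loo_indices:
        x_i = [x for j, x in enumerate(x_old) if j != i]
        y_i = [y for j, y in enumerate(y_old) if j != i]
        y_loo = predict(x_i, y_i, candidate)           # y-hat^(-i)
        pseudo.append(nc * y_full - (nc - 1) * y_loo)  # equation (5)
    mean_pseudo = sum(pseudo) / nc
    # equation (6)
    return sum((p - mean_pseudo) ** 2 for p in pseudo) / (nc * (nc - 1))

def next_design_point(x_old, y_old, candidates, predict):
    """Select the winning candidate: maximum jackknife variance."""
    return max(candidates,
               key=lambda c: jackknife_variance(x_old, y_old, c, predict))
```

In a flat sub-area the leave-one-out predictions barely move, so the jackknife variance is small; near a jump in the I/O function they disagree strongly, and the sequential design therefore spends its next simulation run there.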
for each of the c = 3 candidates. This figure suggests that it is most difficult to predict the output at the candidate point x = 8.33 (see the solid circles or heavy dots).

To quantify this prediction uncertainty, we use jackknifing. First, we calculate the jackknife's pseudo-value for candidate j:

    ỹ_{j;i} = n_c ŷ_j^{(−0)} − (n_c − 1) ŷ_j^{(−i)}                        (5)

where ŷ_j^{(−0)} is the original Kriging prediction for candidate input j based on the complete set of observations (zero observations eliminated: see the superscript −0). From the pseudo-values in (5), we compute the jackknife variance for candidate j:

    s_j² = [1/(n_c(n_c − 1))] Σ_{i=1}^{n_c} (ỹ_{j;i} − ỹ̄_j)²  with  ỹ̄_j = (1/n_c) Σ_{i=1}^{n_c} ỹ_{j;i}    (6)

To select the winning candidate among the c candidates for actual simulation, we find the candidate with the maximum jackknife variance in (6). Once we have simulated the 'winning' candidate, we add the new observation to the set of observations. Next, we choose a new set of candidates with respect to this augmented set.

In the literature, we could not find an appealing stopping criterion for our sequential design; more research is needed. In practice, simulation experiments may stop prematurely (e.g., the computer may break down); our procedure then still gives useful information! Moreover, in one-shot (non-sequential) designs such as LHS, the users must decide a priori on the sample size.

Two simple examples with a single input (k = 1) were displayed in Figures 4 and 5. We saw that our design is intuitively appealing–but now we also use a test set to quantify its performance. In this test, we compare our design with two alternative design types of the same size (n = 36):

i. A sequential design based on the approximate Kriging variance formula discussed in Section 2. This design selects as the next point the input value that maximizes this variance (we do not need to specify candidate points); see Figure 7. The figure illustrates that this approach selects the new input farthest away from the old inputs, namely x = 0.5. This results in a final design that spreads all its points evenly across the experimental area–so it resembles the next design.

ii. A single-stage LHS design. To reduce the variability in our test results, we obtain ten LHS samples–and average our results over these ten designs.

[Figure 7: Approximate Variance of Kriging Prediction Error]

Each design gives the Kriging predictors for the 32 true test values–which gives the EIMSE, defined in (4). Our customized sequential designs give substantially better results; for example, our EIMSE is only 15% of design (i) and 43% of design (ii) in the first example (for more information see Kleijnen and Van Beers 2004b, Table 2).

Note: We focus on sensitivity analysis, not optimization. For example, our design selects input values not only near the 'top' but also near the 'bottom' in Figure 4. If we were searching for a maximum, we would adapt our procedure such that it would not collect data near an obvious minimum. Also see our discussion on further research in Section 5.

4.2 Parametric Bootstrap for Deterministic Simulation

To derive an unbiased estimator of the variance of the Kriging prediction error, Den Hertog et al. (2004) use Monte Carlo sampling from the assumed stationary Gaussian process with a type of covariance function specified by (say) equation (3). As parameters for this process, they use the MLE computed from a (pilot) sample of I/O observations. This Monte Carlo experiment is actually a parametric bootstrap; see Efron and Tibshirani (1993). They find that the classic formula underestimates the true variance of the Kriging predictor. Moreover, the true variance function may be less symmetric than the classic Figure 7. This bootstrapping also demonstrates what it means to model the I/O function of a deterministic simulation by means of a random process; again see Figure 1. (More details will be reported orally at the WSC 2004 conference.)

4.3 Distribution-Free Bootstrap for Random Simulation

In random simulation, the analysts must obtain several replicates in order to get a clear signal/noise picture; again see Section 3. We propose to resample these replicates–with replacement; i.e., we propose distribution-free bootstrapping; again see Efron and Tibshirani (1993). This gives the resampled I/O data (say) (x_i, Y_r*(x_i)), where Y_r*(x_i) denotes the r-th resampled value for input x_i and r = 1, …, p, where p denotes the number of replicates (for simplicity of notation we assume a constant number of replicates–as is often the case when common random numbers are used). Next, we recompute the estimated Kriging weights λ̂* and the corresponding predictor Ŷ*(x). Repeating this bootstrapping (say) b times gives the bootstrap variance

    v̂ar(Ŷ_i*) = [1/(b − 1)] Σ_{g=1}^{b} (Ŷ*_{i;g} − Ŷ̄_i*)²                (7)

where Ŷ̄_i* denotes the average of the b bootstrapped predictors. This estimator is analogous to (6) (the latter equation is the jackknife variance estimator based on cross-validation). The variance estimator in (7) can be used to select the next scenario to be simulated–analogous to the customized sequential design in Section 4.1 with (6) now replaced by (7). More details can be found in Van Beers and Kleijnen (2004) (and will be reported orally at the WSC 2004 conference).

5 CONCLUSIONS AND FUTURE RESEARCH

Kriging gives more accurate predictions than regression does, because regression assumes that the prediction errors are white noise, whereas Kriging allows errors that are correlated; i.e., the closer the inputs are, the more positive are the output correlations. Moreover, regression uses a single estimated parameter set for all input values, whereas Kriging adapts its parameters (Kriging weights) as the input to be predicted changes.

Besides a proper analysis, an appropriate design of the simulation experiments is essential. We discussed several alternatives for classic designs for polynomial regression analysis, namely LHS and our own customized sequential designs for Kriging analysis.

The development of Kriging analysis and designs for random simulation needs much more research. More empirical work is required on Kriging in realistic random simulations, to see if the track record in deterministic simulation may be matched.

Recently, Van Beers and Kleijnen (2004) used distribution-free bootstrapping, and investigated the effects of non-constant variances (which occur in queueing simulations), common random numbers (which create correlations among the simulation outputs), and non-normality (Kriging uses maximum likelihood estimators of the weights, which assumes normality).

Future research may investigate alternative pilot samples–sizes n_0 and components x–and alternative sets of candidate points, for sequential customized designs. A good stopping criterion for these designs is also needed.

It seems interesting to compare our customized sequential designs with the alternatives of Cox and John (1995), Jin, Chen, and Sudjianto (2002), and Sasena et al. (2002). For example, Jin et al. (2002) also use 'cross-validation', but they do not use jackknifing, and they do not use a set of space filling candidate input combinations; our empirical results are more promising than theirs. Distinguishing between sensitivity analysis and optimization is also important.

Comparison of different metamodel types (polynomial regression, Kriging, splines, rational functions, etc.) remains a challenging problem. For example, Clarke, Griebsch, and Simpson (2003) compare five metamodel types. Keys and Rees (2004) present a sequential design using splines (instead of Kriging). Kleijnen (2004) gives more references.

Multivariate outputs also deserve more research, since realistic simulation models give multiple responses; again see Wackernagel (2003).

REFERENCES

Barton, R.R. (1994), Metamodeling: a state of the art review. In Proceedings of the 1994 Winter Simulation Conference, eds. J.D. Tew, S. Manivannan, D.A. Sadowski, and A.F. Seila, 237-244. New York: ACM Press.
Barton, R.R. (1998), Simulation metamodels. In Proceedings of the 1998 Winter Simulation Conference, eds. D.J. Medeiros, E.F. Watson, J.S. Carson, and M.S. Manivannan, 167-174. Los Alamitos, CA: IEEE Computer Society Press.
Chick, S.E. (1997), Bayesian analysis for simulation input and output. In Proceedings of the 1997 Winter Simulation Conference, eds. S. Andradóttir, K.J. Healy, D.H. Withers, and B.L. Nelson, 253-259. New York: ACM Press.
Clarke, S.M., J.H. Griebsch, and T.W. Simpson (2003), Analysis of support vector regression for approximation of complex engineering analyses. In Proceedings of DETC '03, ASME 2003 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Chicago.
Cox, D.D. and S. John (1995), SDO: a statistical method for global optimisation. In Proceedings of the ICASE/LaRC Workshop on Multidisciplinary Design Optimization, eds. N.M. Alexandrov and M.Y. Hussaini, 315-329. Philadelphia: SIAM.
Cressie, N.A.C. (1993), Statistics for Spatial Data. New York: Wiley.
Den Hertog, D., J.P.C. Kleijnen, and A.Y.D. Siem (2004), The correct Kriging variance estimated by bootstrapping. Working paper. Available online via <http://center.kub.nl/staff/kleijnen/papers.html> (accessed August 24, 2004).
Efron, B. and R.J. Tibshirani (1993), An Introduction to the Bootstrap. New York: Chapman & Hall.
Jin, R., W. Chen, and A. Sudjianto (2002), On sequential sampling for global metamodeling in engineering design. In Proceedings of DETC '02, ASME 2002 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, DETC2002/DAC-34092, September 29-October 2, 2002, Montreal, Canada.
Keys, A.C. and L.P. Rees (2004), A sequential-design metamodeling strategy for simulation optimization. Computers & Operations Research 31: 1911-1932.
Kleijnen, J.P.C. (1998), Experimental design for sensitivity analysis, optimization, and validation of simulation models. In Handbook of Simulation, ed. J. Banks, 173-223. New York: Wiley.
Kleijnen, J.P.C. (2004), Invited review: an overview of the design and analysis of simulation experiments for sensitivity analysis. European Journal of Operational Research (in press).
Kleijnen, J.P.C., S.M. Sanchez, T.W. Lucas, and T.M. Cioppa (2004), A user's guide to the brave new world of designing simulation experiments. Working paper. Available online via <http://center.kub.nl/staff/kleijnen/papers.html> (accessed August 24, 2004).
Kleijnen, J.P.C. and R.G. Sargent (2000), A methodology for the fitting and validation of metamodels in simulation. European Journal of Operational Research 120 (1): 14-29.
Kleijnen, J.P.C. and W.C.M. van Beers (2004a), Robustness of Kriging when interpolating in random simulation with heterogeneous variances: some experiments. European Journal of Operational Research (in press).
Kleijnen, J.P.C. and W.C.M. van Beers (2004b), Application-driven sequential designs for simulation experiments: Kriging metamodeling. Journal of the Operational Research Society 55: 876-883.
Koehler, J.R. and A.B. Owen (1996), Computer experiments. In Handbook of Statistics, Volume 13, eds. S. Ghosh and C.R. Rao, 261-308. Amsterdam: Elsevier.
Law, A.M. and W.D. Kelton (2000), Simulation Modeling and Analysis. 3rd edition. New York: McGraw-Hill.
Lophaven, S.N., H.B. Nielsen, and J. Sondergaard (2002), DACE: a Matlab Kriging toolbox, version 2.0. IMM, Technical University of Denmark, Lyngby.
McKay, M.D., R.J. Beckman, and W.J. Conover (1979), A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21 (2): 239-245 (reprinted in 2000: Technometrics 42 (1): 55-61).
Meckesheimer, M., R.R. Barton, T.W. Simpson, and A.J. Booker (2002), Computationally inexpensive metamodel assessment strategies. AIAA Journal 40 (10): 2053-2060.
Mertens, B.J.A. (2001), Downdating: interdisciplinary research between statistics and computing. Statistica Neerlandica 55 (3): 358-366.
Myers, R.H. and D.C. Montgomery (2002), Response Surface Methodology: Process and Product Optimization Using Designed Experiments. 2nd edition. New York: Wiley.
Park, S., J.W. Fowler, G.T. Mackulak, J.B. Keats, and W.M. Carlyle (2002), D-optimal sequential experiments for generating a simulation-based cycle time-throughput curve. Operations Research 50 (6): 981-990.
Régnière, J. and A. Sharov (1999), Simulating temperature-dependent ecological processes at the sub-continental scale: male gypsy moth flight phenology. International Journal of Biometeorology 42: 146-152.
Sacks, J., W.J. Welch, T.J. Mitchell, and H.P. Wynn (1989), Design and analysis of computer experiments. Statistical Science 4 (4): 409-435.
Santner, T.J., B.J. Williams, and W.I. Notz (2003), The Design and Analysis of Computer Experiments. New York: Springer-Verlag.
Sasena, M.J., P. Papalambros, and P. Goovaerts (2002), Exploration of metamodeling sampling criteria for constrained global optimization. Engineering Optimization 34 (3): 263-278.
Simpson, T.W., T.M. Mauery, J.J. Korte, and F. Mistree (2001), Kriging metamodels for global approximation in simulation-based multidisciplinary design optimization. AIAA Journal 39 (12): 2233-2241.
Van Beers, W.C.M. and J.P.C. Kleijnen (2003), Kriging for interpolation in random simulation. Journal of the Operational Research Society 54: 255-262.
Van Beers, W.C.M. and J.P.C. Kleijnen (2004), Customized sequential designs for random simulation experiments: Kriging metamodeling and bootstrapping. Working paper. Available online via <http://center.kub.nl/staff/kleijnen/papers.html> (accessed August 24, 2004).
Wackernagel, H. (2003), Multivariate Geostatistics: An Introduction with Applications. 3rd edition. Berlin: Springer-Verlag.
Ye, K.Q. (1998), Orthogonal column Latin hypercubes and their application in computer experiments. Journal of the American Statistical Association 93: 1430-1439.

AUTHORS BIOGRAPHIES

WIM VAN BEERS is a Ph.D. student in the Department of Information Management of Tilburg University. His research topic is Kriging in simulation. He will defend his thesis in the Spring of 2005. His e-mail address is <wvbeers@uvt.nl>.

JACK P.C. KLEIJNEN is a Professor of Simulation and Information Systems in the Department of Information Management; he is also a member of the Operations Research group of CentER (Center for Economic Research) of Tilburg University. His research concerns computer simulation (especially the statistical design and analysis of simulation experiments