Computers ind. Engng Vol. 22, No. 4, pp. 387-395, 1992
0360-8352/92 $5.00 + 0.00
Printed in Great Britain. All rights reserved
Copyright © 1992 Pergamon Press Ltd

SIMULATION OPTIMIZATION USING SIMULATED ANNEALING

JORGE HADDOCK and JOHN MITTENTHAL


Department of Decision Sciences and Engineering Systems, Rensselaer Polytechnic Institute, Troy,
NY 12180-3590, U.S.A.

(Received for publication 29 April 1992)

Abstract--The purpose of this study is to investigate the feasibility of using a simulated annealing algorithm in conjunction with a simulation model to find the optimal parameter levels at which to operate a system. In particular, we discuss an effort to use simulated annealing to find a combination of input parameter values for a model which optimizes a nonconvex, nonconcave objective function of the input parameters. In the absence of an optimal annealing schedule, we demonstrate that multiple runs of the simulated annealing algorithm can result in an optimal or near-optimal solution to the problem.

INTRODUCTION

A typical optimization problem is to maximize a real-valued function F(X) where feasible points X = (x_1, x_2, ..., x_n) are restricted to some constraint set X ⊂ R^n. If this problem is analytically intractable, as is often the case in manufacturing or other practical environments, then the analyst may choose to simulate the system. The optimization of the objective function in a simulation model is the focus of this paper.
The aim of simulation optimization is to determine the inputs, or parameters, which optimize
the outputs, or performance measures, of the simulation experiment. Note that the values of the
simulation performance measure are actually realizations of a random variable because of the
random nature of the processes described by the inputs. In addition, when an objective function
is defined using these performance measures, it also becomes stochastic in nature. For the purposes
of optimization, the simulation is simply a "black box" which computes a realization of the
(stochastic) function value for a given combination of the parameter values.
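To make the black-box view concrete, the sketch below (in Python, which is not the language of the original study; the function body is a hypothetical stand-in for a real simulation) shows the only interface the optimizer requires: a routine that takes a combination of parameter values and returns one noisy realization of the objective.

```python
import random

def evaluate(x):
    """One 'simulation run' at parameter combination x.

    Hypothetical placeholder: in the setting of this paper it would launch
    a full simulation run and return a point estimate of the performance
    measure, i.e. a realization of a random variable.
    """
    true_response = -(x[0] - 3) ** 2 - (x[1] - 2) ** 2  # unknown to the optimizer
    return true_response + random.gauss(0.0, 0.5)       # stochastic output
```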
Meketon [7] surveys approaches and recent results in simulation optimization. These approaches include standard non-linear programming techniques, response surface methodologies and stochastic approximation techniques. A drawback of such approaches is that they terminate upon finding a local optimum. In another survey, Glynn [4] presents a framework of optimization problems and some of their corresponding solution approaches. The approaches discussed by Meketon are, according to Glynn, more appropriate for a class of problems characterized by a finite number of continuous parameters. However, these approaches are apparently not effective when the parameters are discrete, i.e. take on only a countable number of values.
Simulated annealing is an iterative stochastic search method, analogous to the physical annealing
process whereby material is gradually cooled so that a minimal energy state is achieved. This
method generates a sequence of solutions whose function values have a general decreasing tendency
but are not strictly decreasing. Specifically, it is possible for the sequence of function values to
increase occasionally. In this way, the sequence of solutions may leave the area around a local
minimum to locate either a better local minimum or perhaps the global minimum. Of course, this
approach may just as easily be applied to maximization problems. Metropolis et al. [8] first
introduced the simulated annealing concept while Kirkpatrick et al. [5] successfully applied the
approach to deterministic optimization problems. Bulgak et al. [2] have recently demonstrated one
application of simulated annealing to simulation optimization.
The goals of this paper are to demonstrate that simulated annealing is a viable, effective and
efficient approach to solving problems characterized by a finite number of discrete parameters and
to suggest ways to overcome some obstacles inherent to simulated annealing. In the next section,
the simulated annealing procedure and its associated concerns are discussed. In the third section,
an example model of an automated manufacturing system is described. The objective in this
problem is to maximize profit. The experimental results are contained in the fourth section and
our concluding comments are in the last section.

SIMULATED ANNEALING ALGORITHM

Simulated annealing is a stochastic optimization method which is particularly effective on discrete or combinatorial problems. An additional feature is that this approach only requires a point's corresponding function value, i.e. other functional information is unnecessary. Hence, simulated annealing ought certainly to be a candidate solution approach for simulation optimization.
The algorithm begins by randomly choosing a point X_c in the parameter space, evaluating that point (i.e., running a simulation experiment) and assigning to V_c the corresponding function evaluation. Subsequently an adjacent feasible point X_a satisfying ||X_c - X_a|| = 1 is randomly selected, then evaluated, and the corresponding function evaluation is assigned to V_a. The point X_a becomes the current point X_c with probability

    [1 + exp((V_c - V_a)/T)]^(-1)

where exp(y) = e^y and T is a parameter describing the current temperature of the annealing process. The temperature T is then decreased and this iteration is repeated.
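As a minimal sketch (Python, with our own function name), the acceptance rule can be coded directly from the formula above:

```python
import math

def acceptance_probability(v_c, v_a, t):
    """P(X_a replaces X_c) = [1 + exp((V_c - V_a)/T)]^(-1), maximization form."""
    z = (v_c - v_a) / t
    if z > 700.0:                 # exp would overflow; the probability is ~0 here
        return 0.0
    return 1.0 / (1.0 + math.exp(z))
```

When V_a is much larger than V_c (a strongly uphill move) the probability approaches 1; at high temperatures it approaches 1/2, matching the behavior described next.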
The algorithm's behavior is strongly dependent on the existing temperature. In a maximization problem, the probability of accepting the adjacent point X_a is close to 1/2 at high temperatures, thus allowing acceptance of downhill moves on the way to larger uphill moves. As the temperature decreases, however, the probability of accepting downhill moves decreases while the probability of accepting uphill moves increases.
A sequence of non-increasing temperatures {T(j)}, where T(j) is the temperature during the jth iteration, defines an annealing schedule. Geman et al. [3] demonstrate the convergence of the simulated annealing approach when T(j) satisfies

    T(j) ≥ C / log(1 + j)

for some constant C which is independent of j. Hence, to ensure that convergence to a global optimum occurs, the temperature must be decreased quite slowly.
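For illustration, the slowest-cooling schedule meeting this bound at equality can be written as follows (a sketch; the constant C is problem-dependent):

```python
import math

def geman_temperature(c, j):
    """Logarithmic schedule of Geman et al. [3]: T(j) = C / log(1 + j)."""
    return c / math.log(1.0 + j)

# Even after 10,000 iterations the temperature has fallen only by a factor of
# log(10001) / log(2), roughly 13, which is why so many evaluations are needed.
```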
The major drawback of applying the above annealing schedule is that a slow temperature decrease results in the evaluation of the objective function at many points. This is undesirable in a simulation optimization context since each evaluation corresponds to a simulation run and, hence, is computationally quite expensive. To make fewer evaluations, we employed a more rapidly decreasing (i.e. heuristic) annealing schedule, which Ackley [1], among others, has used.
This annealing schedule is determined by a set of four parameters, {T_i, T_f, r, k}, where T_i > T_f > 0, r ∈ (0, 1) and k is some positive integer. Observe that the annealing schedule {T(j)} generated using the set of parameters {T_i, T_f, r, k} is

    T(j) = T_i            for j = 1, 2, ..., k
    T(j) = r T_i          for j = k + 1, ..., 2k
    ...
    T(j) = r^(n-1) T_i    for j = (n - 1)k + 1, ..., nk

where n is the smallest integer greater than or equal to log(T_f / T_i) / log(r), and so satisfies r^n T_i ≤ T_f < r^(n-1) T_i.
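A sketch of this schedule as code (our own helper; parameter names follow the set {T_i, T_f, r, k} above):

```python
def heuristic_schedule(t_i, t_f, r, k):
    """Geometric annealing schedule: hold each temperature for k iterations,
    multiply by r, and stop once the temperature falls below t_f."""
    schedule, t = [], t_i
    while t >= t_f:
        schedule.extend([t] * k)   # k iterations at the current temperature
        t *= r
    return schedule

# Example: heuristic_schedule(100, 5, 0.8, 7) gives n = 14 temperature levels
# (0.8**13 * 100 ≈ 5.5 ≥ 5 > 0.8**14 * 100 ≈ 4.4), i.e. 98 iterations in all.
```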


A pass of the algorithm is initialized at a starting temperature T_i. The current temperature T_now is maintained for k iterations and then decreased at a rate r, i.e. T_now is replaced by r · T_now. The algorithm stops after the current temperature passes below a threshold value T_f. Another pass of the algorithm may be performed as long as the pass count is less than NO_OF_RESTARTS, a positive integer denoting the number of times to repeat the simulated annealing algorithm from a new randomly selected starting point. The complete algorithm is detailed below.
(Iterated) simulated annealing

Parameters: T_i = initial temperature = 100
            T_f = final temperature = 5 or 10
            r   = temperature decay rate = 0.4, 0.6 or 0.8
            k   = number of iterations at a given temperature = 3, 4, ..., 8

Step 0. Set PASS to 0.
Step 1. Set T to T_i. Select a point X_c at random and evaluate it† (V_c). Set PASS = PASS + 1.
Step 2. Begin loop: do j = 1, ..., k
          Pick an adjacent point X_a at random and evaluate it† (V_a). Set X_c to X_a with probability
          [1 + exp((V_c - V_a)/T)]^(-1).
        End loop
Step 3. Set T to rT. If T ≥ T_f, go to Step 2. Otherwise, if PASS < NO_OF_RESTARTS, then go to Step 1; else stop.
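The steps above translate almost line for line into the following sketch (Python, not the original implementation; the evaluate, neighbor and random_point callbacks are assumptions standing in for the simulation run, the adjacent-point generator and the random start):

```python
import math
import random

def iterated_annealing(evaluate, neighbor, random_point,
                       t_i=100.0, t_f=5.0, r=0.8, k=7, no_of_restarts=5):
    """Iterated simulated annealing for a maximization problem."""
    best_x, best_v = None, -math.inf
    for _ in range(no_of_restarts):                  # Step 0: count passes
        t = t_i                                      # Step 1: reset temperature,
        x_c = random_point()                         # pick and evaluate a start
        v_c = evaluate(x_c)
        while True:
            for _ in range(k):                       # Step 2: k moves per level
                x_a = neighbor(x_c)
                v_a = evaluate(x_a)
                z = min((v_c - v_a) / t, 700.0)      # guard against exp overflow
                if random.random() < 1.0 / (1.0 + math.exp(z)):
                    x_c, v_c = x_a, v_a              # accept the adjacent point
                if v_c > best_v:
                    best_x, best_v = x_c, v_c        # remember the best solution
            t *= r                                   # Step 3: cool down
            if t < t_f:
                break                                # this pass ends below t_f
    return best_x, best_v
```

Returning the best point seen across all passes corresponds to choosing the best solution from among the NO_OF_RESTARTS passes, as discussed below.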
Note that if NO_OF_RESTARTS had been set to 1, then only one pass of the algorithm would
have been performed. This would be sufficient if the annealing schedule used was guaranteed to
yield convergence of the simulated annealing algorithm to a global maximum, independent of the
initial point selected. However, we have no such guarantee.
Some of the factors which influence the effectiveness of a single pass of the (iterated) simulated
annealing algorithm are the parameter value settings used to define the annealing schedule, the
initial starting point selected, and the objective function terrain. If the randomly selected initial
starting point is close to the global maximum then the simulated annealing algorithm is likely to
achieve the global maximum regardless of the objective function terrain or the annealing schedule.
However, when the initial starting point is distant from the global maximum, both the annealing
schedule and the objective function terrain play more important roles. For instance, when the
temperature decreases slowly and the objective function terrain is relatively flat, the global
maximum is likely to be achieved although an excessive number of expensive function evaluations
may have been performed. In contrast, when the temperature decreases quickly and the objective
function terrain is relatively hilly, then the solution obtained may not even be a local maximum.
In the former instance, it would have been preferable to use a more quickly decreasing annealing
schedule while, in the latter, we could compensate by either decreasing the temperature more
slowly, performing multiple passes of the simulated annealing algorithm or perhaps a combination
of both. In the following paragraphs, we provide a discussion of the algorithm's performance for
various values of the annealing schedule parameters.
A high initial temperature means there is a probability of approximately 1/2 of accepting downhill
(i.e., inferior) moves at the start of a search while a lower initial temperature results in a lower
probability. Similarly, using a relatively high final temperature implies there is a relatively high
probability of accepting downhill moves at the end of a search while using a low final temperature
results in a low probability of accepting downhill moves and thereby increases the tendency to
locate a local, if not global, maximum at the end of a search.
Temperature decay rates, r, closer to 1 cause the temperature to decrease slowly and thus allow
the algorithm to search a relatively broad area and to accept a relatively large number of downhill
moves. Hence, using such decay rates is perhaps preferable in steep, hilly terrain where the global
optimum may be several peaks away from the current point. Conversely, use of decay rates closer
to 0 may be appropriate for less hilly terrain where fewer steep downhill moves would be necessary
to explore the space well.

†Each evaluation is in fact a simulation run using the point X as the combination of input parameters. The evaluated value is the corresponding steady-state mean of the simulation run.

The number of iterations to be performed at each temperature, k, also affects the performance
of the algorithm. As k is increased, more points are evaluated and larger areas of the terrain may
be searched. This enhances the algorithm's chances of locating a global maximum.
When selecting annealing schedule parameter values for a specific application, knowledge of the
function terrain is important. In many instances, the terrain is unknown. Under such circumstances,
use of any set of parameter values could yield a solution which may be sensitive to the initial point.
This problem can be overcome by selecting a value of NO_OF_RESTARTS which is strictly
greater than 1. Each subsequent pass of the algorithm is initialized from a randomly selected
starting point, broadening the range of the overall search, and so reduces the sensitivity of the
results to the initial point selected on the first pass of the algorithm.

MODEL

In this section, we consider the simulation optimization problem of maximizing a total expected
profit of a hypothetical automated manufacturing system with a small number of decision variables.
There are three discrete parameters used to describe the operation of this system. The expected
profit is, by construction, a complicated function which only implicitly depends on the parameters
and has multiple local maxima. Hence, this problem is characterized by a finite number of discrete
parameters and is not amenable to optimization via the approaches surveyed by Meketon. Instead,
we solve this problem using simulated annealing. The automated manufacturing system modeled
in this study consists of four machines used to assemble three similar products and a carousel used
for transportation between the machines (see Fig. 1). Each machine has both input and output
buffers. Product batches enter into the system's initial inventory area at fixed and regular time
periods. A more detailed description of the system may be found in Manz [6].
The three parameters, i.e. the decision variables of the simulation, describing the variable features
of the system's operation are (1) the size of the arrival batches, (2) the proportion of products within
the arrival batches, and (3) the size of the output buffers at each machine. Each parameter may
take on multiple values. There are six different batch size values, five different batch product-mixes
and four distinct local output buffer configurations.
The batch size may vary from 20 to 45 pallets in increments of 5, where each pallet contains 10 product items. A maximum batch size of 45 was imposed since the system was overloaded and did not achieve steady state for a majority of the parameter combinations at a batch size of 50.
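Under the adjacency rule ||X_c - X_a|| = 1 used by the algorithm, the three decision variables can be encoded as grid indices; the sketch below (a hypothetical encoding of ours) supplies the random_point and neighbor callbacks over the 6 x 5 x 4 = 120 combinations.

```python
import random

BATCH_SIZES = [20, 25, 30, 35, 40, 45]   # six batch sizes, in pallets
PRODUCT_MIXES = [1, 2, 3, 4, 5]          # five product-mixes (Table 1)
BUFFER_CONFIGS = [1, 2, 3, 4]            # four output buffer configurations (Table 2)
LIMITS = (len(BATCH_SIZES), len(PRODUCT_MIXES), len(BUFFER_CONFIGS))

def random_point():
    """Draw one of the 120 parameter combinations uniformly at random."""
    return tuple(random.randrange(n) for n in LIMITS)

def neighbor(x):
    """Move one unit along a randomly chosen coordinate, staying in the grid."""
    while True:
        i = random.randrange(len(LIMITS))
        step = random.choice((-1, 1))
        if 0 <= x[i] + step < LIMITS[i]:
            y = list(x)
            y[i] += step
            return tuple(y)
```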

Fig. 1. Layout of flexible manufacturing system (an initial inventory area and four machines, each with an input buffer and an output buffer).

Table 1. Batch product-mix

Product-mix 1  Batches contain only one product.
               Batches arrive in the following order: product 2, product 3, product 1.
Product-mix 2  Each batch contains approximately 33% of each product.
               The products are randomly mixed in each batch.
Product-mix 3  Each batch contains approximately 50% of product 1, 25% of product 2, and 25% of product 3.
               The products are randomly mixed in each batch.
Product-mix 4  Each batch contains approximately 50% of product 1, 25% of product 2, and 25% of product 3.
               Each batch is ordered so that product 1 is in the front of the batch, product 2 is in the middle, and product 3 is last.
Product-mix 5  Batches contain only one product.
               Batches arrive in the following order: product 2, product 1, product 3, product 1.

Table 2. Local output capacities at each machine

            Configuration 1   Configuration 2   Configuration 3   Configuration 4
Machine 1          1                 2                 3                 3
Machine 2          1                 2                 3                 4
Machine 3          1                 1                 3                 4
Machine 4          1                 1                 1                 1

Five product-mixes were considered for the arrivals, as shown in Table 1, so that each product-mix shared a common factor with its neighboring product-mix. Specifically, product-mixes 1 and 2, and product-mixes 4 and 5, supply equal quantities of the products to the system over time; product-mixes 2 and 3 are characterized by a random mix of products within batches; and product-mixes 3 and 4 supply equal quantities of each product in each batch. Thus, product-mixes 1 and 2 both supply one-third of each product in the long run. Product-mixes 2 and 3 contain products randomly mixed in each batch. Product-mixes 3 and 4 contain 50% of product 1 and 25% of products 2 and 3 in the long run. In both product-mixes 4 and 5 the products are sequenced in the same order.
The capacities of the local output buffers were varied as shown in Table 2. Four configurations
of local output buffer capacities were considered. The capacities of the buffers were increased since
it was hypothesized that greater output capacity would allow higher utilization of the machines
and thus allow a higher production rate. Higher profits might then be achieved when the batch
sizes are large since the system might achieve steady state.

Table 3. Maximum profit at each parameter configuration (k$ per 2000 min)

                                    Batch size
                       20      25      30      35      40      45
Output buffer configuration 1
  Distribution 1      3.7    15.1    67.2    63.0   113.2    87.0
  Distribution 2      3.6    16.1    69.4    69.0   125.0   127.2
  Distribution 3     -1.4     7.4    61.5    61.0   121.1   123.4
  Distribution 4     -4.4    -1.8    50.9    43.1   102.1    93.3
  Distribution 5     -2.4     2.5    57.8    52.8   113.1   101.3
Output buffer configuration 2
  Distribution 1     -1.6    10.7    64.0    60.4   113.1    96.9
  Distribution 2     -2.4    10.2    63.8    63.7   120.3   123.1
  Distribution 3     -7.4     1.5    55.6    55.2   115.4   118.2
  Distribution 4     -9.8    -6.0    48.3    42.2   103.1    95.6
  Distribution 5     -7.9    -2.4    53.8    49.7   110.8   103.0
Output buffer configuration 3
  Distribution 1    -13.5    -0.7    53.4    51.1   104.2    90.5
  Distribution 2    -14.4    -1.8    51.8    51.7   108.3   111.2
  Distribution 3    -19.4   -10.5    43.6    43.2   103.5   106.4
  Distribution 4    -21.8   -17.9    37.7    32.9    95.9    90.4
  Distribution 5    -19.9   -13.9    42.9    39.4   101.8    95.6
Output buffer configuration 4
  Distribution 1    -19.5    -6.6    47.9    45.6    99.6    89.8
  Distribution 2    -20.4    -7.8    45.8    45.7   102.4   105.2
  Distribution 3    -25.4   -16.5    37.6    37.2    97.5   100.4
  Distribution 4    -27.8   -23.9    31.7    27.2    90.9    86.7
  Distribution 5    -25.9   -19.9    37.2    34.5    97.0    93.2

The performance measure is total expected profit, averaged over a time interval excluding the
initial transient period. Profit is defined as the total revenue minus the total cost. Revenue is accrued
from the sale of each of the products while costs are incurred from the purchase of raw material,
holding partially completed products in inventory, and assembly costs. This profit function,
represented in Table 3, has multiple local maxima and is therefore representative of many objective
functions in practical situations. Such functions tend to be troublesome for traditional mathemati-
cal programming algorithms.

RESULTS

Each of the 120 combinations of automated manufacturing system parameters was simulated using the model--developed in the SIMAN simulation language--and the point estimate representing the expected total profit from each simulation run was placed in a datafile (Table 3). This datafile was constructed to confirm that the objective function contained multiple local maxima, i.e. was neither convex nor concave. As can be seen in Table 3, the expected total profit is nonconvex, nonconcave across batch size and across the product-mix configurations. The global optimum of 127.2 occurs at a batch size of 45, using product-mix 2 and Output Buffer Configuration 1.

Fig. 2. Maximum profit vs. number of iterations at each temperature, k (initial temp = 100, R = 0.8): (a) final temp = 10; (b) final temp = 5. Squares mark individual run values, dots the mean profit, and parenthesized numbers the mean number of unique simulations required for each simulated annealing run.

To evaluate the performance of the simulated annealing algorithm, we ran the algorithm under a number of different annealing schedules. Computer runs were made at three rates of temperature decrease: 0.8, 0.6 and 0.4. The initial temperature was set at 100 for all of the runs while the final temperature was set to 10 for half of the runs and 5 for the other half. The number of points, k, evaluated at each temperature was varied from 3 to 8. Five separate runs were made at each combination of simulated annealing parameters using five different starting points. The results of each simulated annealing search are shown in Figs 2-4. (Fewer than five distinct points may appear at each iteration level, k, since some of the runs achieved the same maximum point.)
From Geman et al. [3], recall that optimal annealing schedules decrease the temperature very slowly. Hence, if we hypothesize that an optimal annealing schedule applied to simulation optimization were to behave in a similar manner, then better solutions should be obtained with parameter values yielding slower sequences of temperature decrease. Similarly, a longer sequence ought to result in better solutions. Therefore, a lower final temperature (T_f = 5), a slower rate of temperature decrease (r = 0.8) and a larger number of iterations performed at each temperature (k = 7 or 8) should supply annealing schedules yielding better solutions. Figures 2, 3 and 4 support these conclusions.
Further consideration of these figures brings up another point as well. When an inefficient annealing schedule is used, the solution's quality is sensitive to the selection of an initial point.

Fig. 3. Maximum profit vs. number of iterations at each temperature, k (initial temp = 100, R = 0.6): (a) final temp = 10; (b) final temp = 5. Squares mark individual run values, dots the mean profit, and parenthesized numbers the mean number of unique simulations required for each simulated annealing run.

This
is an undesirable trait and may be avoided with the use of a near-optimal annealing schedule. For example, in Fig. 2b with k = 7 or 8, observe that each run consistently achieved the global maximum. A more expensive approach, from a computational viewpoint, to dealing with an inefficient annealing schedule is to restart the simulated annealing algorithm a number of times and choose the best solution from among those obtained. As can be seen from Figs 2-4, this approach would result in a solution achieving a value which is optimal or near-optimal for almost every set of annealing parameters. For example, in Fig. 4b, for all values of k except k = 3, the maximum objective value achieved over 5 passes of the simulated annealing algorithm was the global maximum.
The annealing schedules yielding slower temperature decreases and longer temperature sequences
are generally more consistent at locating the global optimum. Specifically, the schedules arising
from the parameter values r = 0.8, T_f = 5 or 10, and k = 7 or 8, in Fig. 2, appear to result in the
highest likelihood of finding the maximum profit with a reasonable number of function evaluations.
Approximately 40 unique points (each requiring a separate simulation run) are evaluated with these
parameters. Thus, approximately one-third of the output space is searched in order to consistently
find the global maximum.

Fig. 4. Maximum profit vs. number of iterations at each temperature, k (initial temp = 100, R = 0.4): (a) final temp = 10; (b) final temp = 5. Squares mark individual run values, dots the mean profit, and parenthesized numbers the mean number of unique simulations required for each simulated annealing run.

SUMMARY

A simulated annealing algorithm was used in conjunction with a simulation model to determine the combination of input parameter values which optimize a nonconvex, nonconcave objective function of the input parameters of an automated manufacturing system. A SIMAN simulation model was developed to evaluate system profit as a function of the levels of three parameters--batch size of arriving products, distribution of products in the batches, and machine output buffer size. Simulated annealing parameters were then chosen so that the simulated annealing program would consistently find the global optimum after evaluating approximately 30% of the 120 input variable combinations.
Determining an annealing schedule which will produce a near-optimal solution when little or no information about the objective function terrain is available is difficult. In general, the investigator cannot afford to reduce the temperature too slowly, and reducing the temperature too quickly may not be effective. Hence, some experimentation may be required to select suitable simulated annealing parameters. The simulated annealing algorithm would therefore appear to be most attractive for applications where a number of similarly landscaped model variations will be investigated. Once parameter values have been determined for one model, the resulting annealing schedule could be used for the other models.
Recall that this experiment considered a relatively small number of decision variables. When a
larger number of decision variables are present, the effectiveness of simulated annealing is not
known but can be determined with further experimentation. Regardless of the number of decision
variables involved, the large computational costs of simulation optimization warrant the use of
techniques, such as frequency domain methods [9], to assess the relative importance of the input
parameters and exclude the unimportant variables from further consideration.
In conclusion, we believe that simulated annealing is a promising approach to the problem of
simulation optimization.

Acknowledgement--This work has been partially supported by a grant from New York State's Center for Advanced
Technology.

REFERENCES

1. D. H. Ackley. An empirical study of bit vector function optimization. In Genetic Algorithms and Simulated Annealing (Edited by Lawrence Davis). Morgan Kaufmann, Los Altos, CA (1987).
2. A. A. Bulgak and J. L. Sanders. Integrating a modified simulated annealing algorithm with the simulation of a manufacturing system to optimize buffer sizes in automatic assembly systems. Proc. 1988 Winter Simulation Conf. (Edited by M. Abrams, P. Haigh and J. Comfort), pp. 684-690 (1988).
3. S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Machine Intell. PAMI-6, 721-741 (1984).
4. P. W. Glynn. Optimization of stochastic systems. Proc. 1986 Winter Simulation Conf. (Edited by J. R. Wilson, J. O. Henriksen and S. D. Roberts), pp. 52-59 (1986).
5. S. Kirkpatrick, C. D. Gelatt, Jr and M. P. Vecchi. Optimization by simulated annealing. Science 220, 671-680 (1983).
6. E. M. Manz. Optimization of a flexible manufacturing system simulation model using simulated annealing. Master's Thesis, Department of Decision Sciences and Engineering Systems, Rensselaer Polytechnic Institute, Troy, New York (1989).
7. M. S. Meketon. Optimization in simulation: a survey of recent results. Proc. 1987 Winter Simulation Conf. (Edited by A. Thesen, H. Grant and W. David Kelton), pp. 58-67 (1987).
8. N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller and E. Teller. Equation of state calculations by fast computing machines. J. Chem. Phys. 21, 1087-1092 (1953).
9. L. W. Schruben. Simulation optimization using frequency domain methods. Proc. 1986 Winter Simulation Conf. (Edited by J. R. Wilson, J. O. Henriksen and S. D. Roberts), pp. 366-369 (1986).
