A Monte Carlo Primer: A Practical Approach to Radiation Transport
Stephen A. Dupree
and
Stanley K. Fraley
Sandia National Laboratories
Albuquerque, New Mexico
Preface
Stephen A. Dupree
Stanley K. Fraley
Albuquerque, NM
July 2001
Contents
Chapter 1. Introduction
3.1 Introduction 57
3.2 Neutron Interactions and Mean Free Path 57
3.3 Neutron Transport 59
3.4 A Mathematical Basis for Monte Carlo Neutron Transport 62
3.5 Monte Carlo Modeling of Neutron Motion 65
Example 3.1 Monoenergetic point source with isotropic scattering 67
Example 3.2 Self-attenuation in a spherical source of gamma rays 75
Example 3.3 Beam of neutrons onto a slab shield 77
3.6 Particle Flight Path in Complex Geometries 80
Example 3.4 Mean distance to the next collision 81
3.7 Multi-Region Problems 83
Example 3.5 Two-region slab with a void 83
Exercises 87
Appendix
Random Number Generators 317
Bibliography 329
Index 335
Chapter 1
Introduction
The Monte Carlo method can be used to solve a wide range of physical
and mathematical problems. Its utility has increased with the general
availability of fast computing machines, and new applications are
continually forthcoming. However, the basic concepts of Monte Carlo are
both simple and straightforward, and can be learned by using a personal
computer. In this book we will use such a computer as the basis for
developing and explaining the fundamental concepts of Monte Carlo as
applied to neutral particle transport. As each topic is addressed a
corresponding set of software instructions will be developed. The software
that results will be assembled into a program configuration that is
representative of a full-scale Monte Carlo radiation transport program. The
components of the program will be explained and combined in a fashion that
will allow the reader to understand the function and contribution of each to
the final, and sometimes daunting, whole.
The Monte Carlo method is a technique of numerical analysis that is
based on the use of sequences of random numbers to obtain sample values
for the problem variables. The calculational process used in Monte Carlo is
an artificial construct, usually a computer program that is mathematically
equivalent to the problem being analyzed. Sample values for the problem
variables are obtained by selecting specific numbers from appropriate ranges
for the variables in the problem using probability distributions for such
variables. The desired solution can be obtained, along with estimates of
uncertainties in the solution, by analyzing the results from the sample
values. The sample evaluation in a Monte Carlo calculation is somewhat
equivalent to conducting an experiment. Both an experiment and a Monte
p = 1/π   (1.1)
If the needle is tossed randomly onto the plane of parallel lines n times, and
it is found by observation to intersect a line k times, then
p ≈ k/n   (1.2)

By the weak law of large numbers, this expression becomes exact in the limit of large n.
The basis for this result is as follows. If parallel lines are spaced a distance d apart, and a needle of length λ ≤ d is tossed randomly among them, then the angle between the needle and the lines is random over [0,π]. The position of the center of the needle, x_c, is randomly distributed over the interval [0,d] in a coordinate x that is perpendicular to the parallel lines, as measured from the nearest "bottom" line (see Figure 1.1).
[Figure 1.1: the needle, the coordinate x, and the parallel lines spaced a distance d apart.]
The projected length of the needle onto the coordinate x is λ sin α, where α is the angle between the needle and the parallel lines. If x_c < 0.5 λ sin α, or if x_c > d − 0.5 λ sin α, the needle will intersect a line. Thus two distinct regions in the space of α and x can be defined as shown in Figure 1.2: the region D in which the needle does not touch a line, and the region L in which it does touch a line. The probability p that the needle will touch a line is the ratio of the area of L to the total area D + L = πd. Thus
p = L/(D + L) = [2/(πd)] ∫_0^(π/2) λ sin α dα = 2λ/(πd)   (1.4)
If the needle is half as long as the spacing between the lines, then λ = d/2 and, by eqn 1.4, p = 1/π.
[Figure 1.2: the regions D and L in the space of α and x.]
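The tossing procedure just described is easy to simulate. The book's programs are written in Fortran; the following minimal Python sketch (the function name and defaults are ours) tosses a needle of length λ among lines spaced d apart and inverts eqn 1.4 to estimate π:

```python
import math
import random

def buffon_pi(n, lam=0.5, d=1.0):
    """Toss a needle of length lam among lines spaced d apart n times;
    invert p = 2*lam/(pi*d) (eqn 1.4) to estimate pi."""
    hits = 0
    for _ in range(n):
        alpha = random.uniform(0.0, math.pi)   # angle between needle and lines
        x_c = random.uniform(0.0, d)           # center position above nearest line
        proj = 0.5 * lam * math.sin(alpha)     # half the projected length
        if x_c < proj or x_c > d - proj:       # needle crosses a line
            hits += 1
    # p is estimated by hits/n, so pi is estimated by 2*lam*n/(d*hits)
    return 2.0 * lam * n / (d * hits)
```

With λ = d/2 this reduces to estimating 1/π directly, as in eqns 1.1 and 1.2.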
random variables, and the coining of the name "Monte Carlo," was begun at
Los Alamos during the Manhattan Project of World War II by John von
Neumann and Stanley Ulam. The Los Alamos researchers introduced the
use of variance reduction, particularly the techniques known as Russian
roulette and splitting,5 to increase the efficiency with which random
variables can be evaluated and solutions obtained.
The original Monte Carlo calculations performed at Los Alamos were
executed using slide rules and mechanical calculators, with the "random"
numbers being obtained by several different methods. Availability and
improvement of computers following World War II greatly extended the use
of the Manhattan Project Monte Carlo technique. The first set of random
numbers6 and the first lengthy monograph on Monte Carlo methods7 were
published in the mid-1950s. Since that time the Monte Carlo method has
been extended to numerous areas of science and technology and has been
used as an analysis or engineering design tool in many fields of research.
Today it is a standard mathematical tool applied to complex problems not
tractable by other methods and is the method of choice in certain
applications.
calculations typically involving the use of low order bits in digital computer
words. The number strings produced by such mathematical algorithms are
called pseudorandom because the string of numbers they produce can be
reproduced at will. However, the sequence of a good quality random
number generator will exhibit a reasonable degree of randomness.
Randomness in this application means that sequential numbers are
uncorrelated and, in the limit as many numbers are selected, the density of
numbers is uniformly distributed over some interval.
To reiterate, random processes can be evaluated by modeling the
physical process involved or by solving equations that describe the process.
The direct, or analog method of solution, in which the physical process itself
is modeled, is usually the easiest to understand since the steps involved in
the solution follow the steps in the physical process. The solution of
mathematical models is the more flexible and useful application, however,
since such models are not constrained to physical processes. Mathematical
Monte Carlo also lends itself readily to the incorporation of variance
reduction methods. Such methods can greatly reduce the time required to
achieve a result of satisfactory accuracy in a Monte Carlo calculation.
Our approach will be to simulate the child walking the balance beam and
then count the number of successful walks as a fraction of the total number
of trials. By repeating this simulation a large number of times an estimate
can be obtained of both the average number of times out of ten tries the
child will succeed and the distribution of the number of successes about this
average. The latter estimate is particularly important since without it one is
unable to evaluate the precision of the answer. When evaluating a random
variable, often the only method available for estimating the variance of the
mean is to obtain multiple estimates, compute the average, and then estimate
the variance of this average based on the distribution of the estimates. It
should be noted that a Monte Carlo estimate of a value without an
associated uncertainty is essentially meaningless. It is similar to a statement
that, "People weigh 157.3 pounds, because that is what I measured when I
weighed somebody once."
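The text's later numbers (a success probability of 0.59, and a 0.09 chance of falling on the second step) are consistent with a five-step beam and a 0.1 probability of falling at each step. Assuming those values, the simulation can be sketched in Python (the book's examples are in Fortran; names are ours):

```python
import random

def walk_beam(steps=5, p_fall=0.1):
    """One walk along the beam: return the step on which the child falls,
    or None if the walk succeeds. (Five steps and p_fall = 0.1 are assumed.)"""
    for step in range(1, steps + 1):
        if random.random() < p_fall:
            return step
    return None

def successes_in(trials):
    """Count successful walks in a batch of trials."""
    return sum(walk_beam() is None for _ in range(trials))
```

Repeating `successes_in(10)` many times yields both the average number of successes per ten tries and the spread of that number about its average.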
The Monte Carlo method of solution assumes there is an unlimited
supply of random numbers available in the range (0,1). These random
numbers will be used one-by-one to decide whether an event, which has a
certain probability of occurring, will be evaluated as having occurred. For
example, if an event has a 0.1 chance of occurring under a particular
circumstance, then a random number that is selected from the interval (0,1)
and has a value less than 0.1 could indicate the event occurred, and a
random number (selected from the same interval) greater than or equal to
0.1 could indicate that the event did not occur.
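This threshold test is a single comparison; a minimal Python sketch (function name is ours):

```python
import random

def event_occurs(p=0.1):
    """The event 'occurs' when a random number from (0,1) falls below its
    probability p; otherwise it does not occur."""
    return random.random() < p
```

Over many trials the fraction of occurrences approaches p.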
In a formal sense, the question of whether to include the single point on
the boundary between domains in a numerical space for a random number in
the "lower" or "upper" portion of the space is irrelevant. That is, in the
above example, inclusion of the point for which the random number equals
0.1 in the domain representing an event, instead of the absence of the event,
makes no difference in the result. Because there are an unlimited number of
points on a line between any two values of the coordinate along the line -
i.e., because the number of numerical values between any two different
numbers is unbounded - the probability of selecting a particular point, or
numerical value, such as 0.1, from the continuum of values available to a
truly random number, is zero.
However, in practice, the word length of a computer is finite. Therefore
the probability of selecting a particular value from the available set of
numbers between two limits is not zero. Nevertheless, good quality random
number generators use a high order of precision (at least 32 bits) and the
probability of selecting one particular value is small. In this case the
definition of domain boundaries is of little consequence. On the other hand,
when a small number of significant digits is used for the random number
sequence, as will be the case in the present example, it becomes important
that the correct end of the range of the variable be included in the definition
(1.5)
That is,
(1.6)
since the child cannot fall on the sixth step, there being no sixth step. One
would thus expect about 16 successes in 27 tries whereas our short Monte
Carlo calculation produced 13 successes in 27 tries.
It is clear that this Monte Carlo estimate contains some degree of error.
If our result were normally distributed, a concept we will discuss in Chapter
2, the standard deviation of our estimate would be the square root of the
number of successes and we would say that the average number of successes
per 27 tries is 13 ± 3.6. Normalizing to ten trials, as specified in the problem
definition, we would estimate the number of successes per ten trials to be
4.8 ± 1.3. From eqn 1.7 the exact value is slightly larger than 5.9, which is
within our estimated standard error.
There is more information in the results shown in Table 1.2 than this
simple average. For example, we have coincidentally produced an estimate
of the probability of the child falling after zero through four steps, as well as
the probability of completing the walking of the beam. To obtain these
estimates we need merely count the number of times each result was
obtained in the simulation. Access to these results was a simple matter of
recording the relevant data as the calculation proceeded. In a computer
calculation one may either record the entire sequence of events in the
simulation, as was done in Table 1.2, record some relevant portion of that
sequence, or record just the event data one wishes to use. In the latter case
the entire sequence of events would have to be recalculated to score
alternative results. Planning for the scoring is desirable in Monte Carlo and
will be discussed in Chapter 7.
Table 1.2 shows that the child fell off the beam on the second step three
times during the 27 trials conducted. This gives a probability per trial of
0.11 ± 0.06 of falling on the second step. From eqn 1.6 we see the correct
probability is 0.09. Because of the small number of times this result will
occur, relative to the successful walking of the beam (which has a
probability of 0.59), many more simulations of the type considered here
would be required to improve the accuracy of the Monte Carlo estimate for
the probability of falling on the second step than are included in Table 1.2.
However, if one were interested in obtaining an accurate estimate of this
quantity while running as few simulations as possible, one could attempt to
do so using variance reduction, as will be discussed in Chapter 6.
Consider a circle of unit radius with its center at the origin. We wish to
estimate the area within the portion of the circle that is in the positive
quadrant; i.e., the area of the portion of the circle in the quarter space in
which both x and y are positive. In a sense this problem is similar to Buffon's Monte Carlo determination of 1/π since this area is, of course, proportional to π. Thus this result will provide us with an additional method to estimate π.
A Monte Carlo solution for the area can be made using the following
process. A pair of random numbers, each distributed uniformly over the
range (0,1), is used to select a random point in the square defined by 0 ~ x ~
1 and 0 ~ y ~ 1. The point so selected is then examined to see whether it is
inside or outside the unit circle centered at the origin. If the point is outside
the unit circle it is rejected; i.e., it is not included in the tally of points found
to be inside the circle. If the point is inside the circle the result is included
in the tally. This process is repeated many times to obtain a number of
points inside the circle. By comparing this tally with the total number of
points evaluated one can obtain an estimate of the probability of a point in
the unit square, selected randomly, being inside the circle. That is, the ratio
of the number of points not rejected, which are those inside the circle, to the
total number of trials is equal to the ratio of the area of the quadrant of a unit circle (π/4) to the area of the unit square (1.0). Thus the ratio is an estimate of the quantity π/4. This method of solution is known as the rejection
technique.
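The book's Table 1.3 gives this calculation in Fortran; a minimal Python sketch of the same rejection estimate (names ours) might read:

```python
import math
import random

def estimate_pi(n):
    """Estimate pi by counting random points in the unit square that fall
    inside the quarter circle; the accepted fraction estimates pi/4."""
    inside = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    p = inside / n                              # estimate of pi/4
    sigma = math.sqrt(p * (1.0 - p) / n)        # standard deviation of the mean
    return 4.0 * p, 4.0 * sigma
```

The returned standard deviation shrinks as 1/√n, which is why so many points are needed for several significant digits.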
This problem involves more than the mere selection and counting of
random numbers as was done in Example 1.1. Here we must compare the
location of the point selected with the arc of the unit circle to determine
whether the point is inside or outside the circle. Thus, after we select the
point (x,y), we must find whether x² + y² ≤ 1. This would be tedious to
calculate by hand but is simple to program on a computer. A sample Fortran
program to determine the ratio of points inside the circle to total points
selected is shown in Table 1.3.
The function 'fltrn,' used in the program of Table 1.3, returns a pseudorandom number in the range (0,1). The starting point in the random number string could be set by the subroutine 'rndin.' The argument of 'rndin' must be an integer from one through 2³¹ − 2. The purpose of using 'rndin' would be to enable the user to repeat a sequence of pseudorandom
numbers should this be desired in order to check or debug a calculation, or
to resume a calculation without repeating any part of the random number
string already used. These concepts, along with the operation of the random
number generator and the routines associated with it, are described in the
Appendix. The user may substitute another random number generator, such
as the one supplied by the Fortran compiler being used, but should be aware
that such generators may be of poor quality. Finally, the random number
generator is not shown explicitly in the Fortran examples in this book, and
the appropriate routines from the Appendix, or elsewhere, must be linked to
the executable files.
The program shown in Table 1.3 consists essentially of a 'do' loop that
retrieves two sequential random numbers, 'x' and 'y,' and tests whether the
resulting point (x,y) is inside the unit circle. If so a score is added to the
running tally, 'npi,' and the square of the score is added to a second tally
called 'sumsq.' The latter, as we shall see, enables us to estimate the
uncertainty in the result. As given, the program uses 10⁹ points and calculates the expected value of π, as well as the standard deviation of the expected value, on the basis of these points.
The results of executing the program in Table 1.3 are shown in Table
1.4. This table provides the results for several different sample sizes. The
approach to the correct value of π is evident as the number of samples increases. From Table 1.4 it is clear that, using this unbiased rejection calculation, almost 10⁹ points must be selected in order to obtain an estimate of π accurate to five significant digits. Even allowing for the speed of modern computers this is not a very efficient method of calculating π.
σ² = ∫_{−∞}^{+∞} (t − μ)² f(t) dt   (1.8)

where μ is the mean and f(t) is the probability density function of the random variable t (see Chapter 2).
We will show that the second central moment of the distribution is the difference between the mean of the square of the estimated quantity, ⟨x²⟩, and the square of the mean of the estimated quantity, ⟨x⟩². Here, angle brackets are used to denote the estimated mean of a random variable. We will further show that the estimate of the standard deviation obtained from n samples is σ/√n; i.e.,

σ_⟨x⟩ = σ/√n   (1.9)
I = ∫_a^b f(x) dx = (b − a) f̄   (1.10)

where f̄ is a constant. Formally, f̄ is the "average" value of the function f(x) over the interval (a,b). One way to calculate this average is to evaluate the function at many random points xi within the domain of the integral and estimate f̄ by
⟨f⟩ = (1/n) Σ_{i=1}^n f(xi)   (1.11)
To calculate ⟨f⟩ using Monte Carlo methods, one samples the integrand over the range (a,b) and computes the sum of eqn 1.11 based on those samples. By eqn 1.10 the estimated value of the integral is then ⟨f⟩ times the range of the integral, b − a. In its simplest form, the Monte Carlo sample
points are all treated the same; i.e., the "weights" of the sampled points are
equal. Such a calculation, in which the points xi are selected at random
regardless of how the function f(x) varies over the interval (a,b), is called
simple, unbiased, or analog Monte Carlo. It is also sometimes known as
"crude" or "brute force" Monte Carlo.
Since the Monte Carlo estimate of the average, ⟨f⟩, is based on a finite
number of points, n, all estimates of the integral will have an uncertainty
associated with them. To reduce this uncertainty, repeated calculations can
be made. If independent random number strings are used to select the xi, each estimate will give a different result since the randomly selected xi will vary from calculation to calculation. Therefore each such calculation will provide an independent estimate of ⟨f⟩, and these estimates can be averaged
to improve the accuracy of the overall result; i.e., the information obtained
from previous evaluations is not lost but can be combined with the new
information to improve the accuracy of the result. In contrast, to increase
the accuracy of a deterministic result obtained from a given numerical
approximation to a definite integral one must increase the number of
intervals used in the calculation and discard any results obtained with fewer
intervals.
As we have seen, with Monte Carlo the process of estimating the answer
can also provide an estimate of the standard deviation of the answer. Thus
from the same calculation the user can obtain both the estimated result and
an objective measure of the statistical uncertainty in the result.
I = ∫_0^5 dx/(1 + x²)   (1.13)

I ≈ [(b − a)/n] Σ_{i=1}^n f(xi) = (5/n) Σ_{i=1}^n 1/(1 + xi²)   (1.14)
This example uses only 100 points to evaluate the integrand and
produces a result with a fractional standard deviation (the standard deviation
divided by the mean) of approximately 10%. Furthermore, because the
random number list includes only two significant digits, several numbers are
duplicated. Thus the integrand is evaluated at the same point more than
once. Normally many more points would be selected, and more significant
digits in the random numbers would be used to avoid duplication.
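The crude Monte Carlo quadrature of eqns 1.9 through 1.11 can be sketched in Python (the book's examples are in Fortran; the function name is ours, and the integrand 1/(1 + x²) below is used only for illustration):

```python
import math
import random

def mc_integral(f, a, b, n):
    """Crude Monte Carlo estimate of the integral of f over (a, b):
    the integral is (b - a) times the average of f at n random points,
    and the uncertainty is sigma/sqrt(n) as in eqn 1.9."""
    samples = [f(a + (b - a) * random.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    sigma_mean = math.sqrt(var / n)
    return (b - a) * mean, (b - a) * sigma_mean
```

Applying this to the integrand of eqn 1.13 over (0, 5) converges toward arctan 5, with the reported uncertainty shrinking as 1/√n.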
Monte Carlo methods are well suited for the solution of problems that
can be cast in the form of an integral equation. In particular, the transport of
subatomic particles through matter lends itself to an integral form of the
Boltzmann equation that can be solved even in complex geometries using
relatively simple Monte Carlo techniques. Such Monte Carlo solutions to
transport problems have direct analogy to the physical processes of particle
interactions in the material involved, and the degree of detail modeled in the
particle interactions can vary with the objectives of the user.
The mathematical technique of Monte Carlo is a practical tool for
solving complex physical or mathematical problems. As computers have
improved in memory capacity and speed, and as research interests have
changed, the areas of application have expanded. The technique itself has
proved to be enduring, however, and knowledge of its application provides a
tool to the researcher for use in many different fields. The present book will
introduce the reader to practical approaches and use of computer solutions
of neutral particle transport problems using the Monte Carlo technique.
Exercises
1. The probability of throwing "box cars" (two sixes) with a standard pair of
dice is 1/6² ≈ 0.02778. Use Monte Carlo sampling to verify this result and
to determine the probability of throwing box cars twice in a row.
2. Using the rejection technique, calculate the area under a half wavelength
of a sine wave,
y = ∫_0^π sin x dx
Estimate the standard deviation using eqn 1.9, and show that the standard
deviation is reduced by approximately a factor of 2 if the number of samples
taken is increased by a factor of four.
I = ∫_{0.25} Φ(x) dx
using Monte Carlo and show that I ~ 0.4. Estimate the standard deviation of
your result.
3 Lord Kelvin, "Nineteenth Century Clouds over the Dynamical Theory of Heat and Light," Phil. Mag. 6, 2, 1901, pp. 1-40.
4 W. S. Gosset, "Probable Error of a Correlation Coefficient," Biometrika 6, 1908, p. 302.
B A stochastic event is one for which the outcome cannot be predicted with certainty. Games
of chance such as the throwing of dice involve stochastic events.
Chapter 2
Monte Carlo Sampling Techniques
possible events accessible to the random variable is called the sample space
for the random variable. The mathematical definition of a random variable
is:
b A word on notation: {u ∈ U : a ≤ V(u) ≤ b} denotes the set of u in U such that a ≤ V(u) ≤ b. This set will also be abbreviated as {u : a ≤ V(u) ≤ b}, {a ≤ V(u) ≤ b} or {a ≤ V ≤ b}.
Table 2.1. Sample Space of Three Random Variables
Blue Die and Red Die Combinations    Sum    Product    Largest Number
blue one and red one 2 1 1
blue one and red two 3 2 2
blue one and red three 4 3 3
blue one and red four 5 4 4
blue one and red five 6 5 5
blue one and red six 7 6 6
blue two and red one 3 2 2
blue two and red two 4 4 2
blue two and red three 5 6 3
blue two and red four 6 8 4
blue two and red five 7 10 5
blue two and red six 8 12 6
blue three and red one 4 3 3
blue three and red two 5 6 3
blue three and red three 6 9 3
blue three and red four 7 12 4
blue three and red five 8 15 5
blue three and red six 9 18 6
blue four and red one 5 4 4
blue four and red two 6 8 4
blue four and red three 7 12 4
blue four and red four 8 16 4
blue four and red five 9 20 5
blue four and red six 10 24 6
blue five and red one 6 5 5
blue five and red two 7 10 5
blue five and red three 8 15 5
blue five and red four 9 20 5
blue five and red five 10 25 5
blue five and red six 11 30 6
blue six and red one 7 6 6
blue six and red two 8 12 6
blue six and red three 9 18 6
blue six and red four 10 24 6
blue six and red five 11 30 6
blue six and red six 12 36 6
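The sample space of Table 2.1 can be generated directly; a short Python sketch (variable names ours) enumerates the 36 equally likely outcomes and the three random variables defined on them:

```python
from itertools import product

# All 36 equally likely (blue, red) outcomes of Table 2.1.
space = list(product(range(1, 7), repeat=2))

# The three random variables of Table 2.1, evaluated over the sample space.
sums = [b + r for b, r in space]
products = [b * r for b, r in space]
largest = [max(b, r) for b, r in space]
```

Counting occurrences in these lists gives the probability of any event, e.g. a sum of 7 occurs in 6 of the 36 outcomes.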
C In some literature the "probability density function" is referred to as the "density function,"
the "probability distribution function," the "distribution function," or even as the
"distribution."
2.1.2 Distributions
F(x) = ∫_{−∞}^x f(t) dt   (2.2)
By its definition, f(x) is non-negative and the integral of f(x) over R is unity,

∫_{−∞}^{+∞} f(t) dt = 1   (2.3)
F(b) − F(a) = ∫_a^b f(t) dt = P{u : a < V(u) ≤ b}   (2.4)
Since the total area under f(t), from t = −∞ to t = +∞, is unity, the probability
of V taking on some value is unity. Expressions similar to eqns 2.2 - 2.4
exist when the pdf, and thus the cumulative distribution function, is a
function of more than one variable, but the present description will be
limited to functions of a single variable.
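The relation in eqn 2.4 is easy to check numerically for a specific pdf; the following Python sketch assumes, purely for illustration, the exponential pdf f(t) = e^(−t) for t ≥ 0:

```python
import math

# Assumed example pdf f(t) = exp(-t), t >= 0, with cumulative distribution F.
def F(x):
    return 1.0 - math.exp(-x) if x > 0.0 else 0.0

def integrate(f, a, b, n=100000):
    # Simple midpoint-rule quadrature of f over (a, b).
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

lhs = F(2.0) - F(0.5)                              # F(b) - F(a)
rhs = integrate(lambda t: math.exp(-t), 0.5, 2.0)  # integral of f from a to b
```

Both sides equal the probability P{0.5 < V ≤ 2}, as eqn 2.4 states.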
The random variable V is said to be discrete if the set of values that V can take is finite (or countable). For this case, the probabilities P{V = x} take on non-zero values only for the possible discrete values xi. The pdf
is then defined by:
(2.5)
(2.6)
Further,
2.2 Sampling
events, represent those that have occurred for the purpose of the calculation.
This process is called sampling. To remain consistent with the
mathematical or physical process being modeled, the procedure for
performing the sampling must be a "fair game;" i.e., it must select events
from the sample space and count, or weight, them in such a manner that the
result obtained is identical to one that could occur in an experimental
realization of the process.'
The simplest way of playing a fair game is to insure that the probability
with which an event is selected for evaluation is equal to the probability
assigned to that event in the sample space. However, this is not the only
way to play a fair game. Selecting an event with a frequency that is
different from the probability assigned to it in the sample space while still
maintaining fair game conditions is called biasing. Biasing is an important
part of Monte Carlo and will be discussed in Chapter 6.
There are, from the above definitions, two functions that may be
available for selecting an event from a sample space: the cumulative
distribution function and the pdf. The sampling of events uses the following
information:
• The sample space from which a particular event or sample is to be selected,
• The value of the random variables associated with every event in the sample
space (this is required so that the Monte Carlo model can deal with real
numbers),
• The cumulative distribution and/or the pdf for the random variables
involved in the problem, and
• A method of obtaining a sequence of random numbers.
These four elements are used in a Monte Carlo calculation in the following
manner:
• One or more random numbers are selected.
• These random numbers are used to select a point that is either in or is
associated with the domain of the appropriate probability function. This
point determines the value of a random variable for purposes of the
calculation.
• The random variable is either the result of the calculation or is in one-to-one
correspondence with the result.
Given a random number ξ, the corresponding random variable associated with the cumulative distribution function F(x) and pdf f(x) is given by

ξ = P{u : V(u) ≤ x} = F(x) = ∫_{−∞}^x f(t) dt   (2.8)
Eqn 2.8 forms the basis for a significant portion of the Monte Carlo
technique. With this equation, random numbers distributed over the interval
from 0 to 1 can be used to select random events from a sample space; i.e.,
the random number ξ is used to select F(x), which assures a fair game.
x = F⁻¹(ξ)   (2.9)
If F⁻¹(ξ) does not exist, or if the difficulty of solving for x using eqn 2.9 (or a functionally equivalent inverse) is prohibitive, other methods of solving
for x must be used. For example, the function f(x) may be integrated
numerically. Tabular methods based on evaluations of F(x) over an
appropriate range can also be used. In practice, the mathematical models of
a number of real processes lead to cumulative distributions for which an inverse does not exist.c In this case some indirect method, such as the
rejection technique, introduced in Example 1.2 and discussed below, must
be used to sample the random variable.
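A tabular inversion of F(x) can be sketched as follows (Python; a simplified illustration with names of our choosing, not the book's routine). The table holds F evaluated at a grid of points, and the sample is found by locating where F crosses the random number and interpolating linearly:

```python
import bisect
import random

def sample_tabulated(xs, Fs):
    """Sample x from a CDF tabulated as Fs at points xs
    (Fs must rise from 0.0 to 1.0). Linear interpolation is a sketch;
    production codes refine the table and the interpolation."""
    xi = random.random()
    i = bisect.bisect_left(Fs, xi)
    if i == 0:
        return xs[0]
    t = (xi - Fs[i - 1]) / (Fs[i] - Fs[i - 1])
    return xs[i - 1] + t * (xs[i] - xs[i - 1])
```

This plays the same fair game as eqn 2.9, with the table standing in for the analytic inverse.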
One commonly used distribution for which an inverse can be easily
calculated is the exponential distribution. The exponential distribution is
defined by the pdf

f(x) = 0 for x < 0, and f(x) = η e^(−ηx) for x ≥ 0   (2.10)

or, equivalently, by the cumulative distribution function

F(x) = 0 for x < 0, and F(x) = 1 − e^(−ηx) for x ≥ 0   (2.11)
where η is a constant. For example, F(x) as defined by eqn 2.11 may be the
fraction of particles removed from a parallel beam of particles by scattering
or absorption after passing a distance x through a uniform material, or it may
be the fraction of a population of radioactive nuclei that have decayed after
c If the set {x : f(x) = 0}, where f(x) is a pdf, is a Lebesgue measurable set with measure greater than zero, for x in the range of interest, then F⁻¹ does not exist for x in that range. Consider
a beam of particles passing through two slabs separated by a vacuum. The probability of a
particle interaction in either of the slabs will be greater than zero. The probability of an
interaction while in the vacuum will be zero. The cumulative distribution function for this
configuration does not have an inverse.
(2.12)
x = −(1/η) ln ξ   (2.14)
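In Python, sampling from the exponential distribution by eqn 2.14 is a single line (a sketch; the shift to 1 − ξ below is our precaution against the zero endpoint of the generator, permissible because ξ and 1 − ξ are identically distributed):

```python
import math
import random

def sample_exponential(eta):
    """Invert F(x) = 1 - exp(-eta*x), eqn 2.14: x = -ln(xi)/eta."""
    xi = 1.0 - random.random()   # lies in (0, 1], so ln(xi) is always defined
    return -math.log(xi) / eta
```

The mean of the samples approaches 1/η, the mean free path when η is a macroscopic interaction probability per unit length.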
where the density of points throughout the sphere is n per unit volume. The
cumulative distribution function for the points N is thus the ratio of the
number of points in a sphere of radius r ≤ 1 to the total number of points in
the sphere of unit radius; i.e.,
F(r) = [∫_0^r 4πnρ² dρ] / [∫_0^1 4πnρ² dρ] = r³   (2.16)
Therefore, by eqn 2.8, to select a point at random within a unit sphere using a random number ξ, we have

ξ = F(r) = r³   (2.17)
Solving for the inverse, eqn 2.9, we obtain an expression for the radius r in terms of the random number ξ,

r = ξ^(1/3)   (2.18)
Using the random numbers from Table 1.1 we can evaluate 100 radial
points using eqn 2.18. Obviously the results will be somewhat coarsely
distributed because only two significant digits are available in that set of
random numbers. More significant digits would be required in the random
numbers to obtain a smooth distribution of cube roots.
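A sketch of this sampling (Python rather than the book's Fortran; the function name is ours):

```python
import random

def sample_radius():
    """Eqn 2.18: r = xi**(1/3) gives the radius of a point selected
    uniformly within a unit sphere."""
    return random.random() ** (1.0 / 3.0)
```

Because the pdf of r is 3r², the sampled radii cluster toward the outside of the sphere; their mean approaches 3/4.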
The results are plotted in Figure 2.1 and are shown in Table 2.3. In the
figure the results of Table 2.3 have been grouped into intervals of one tenth
the total radius; i.e., the figure shows the number of times the cube root was
found to be in each radial interval {0.0-0.1, 0.1-0.2, ..., 0.9-1.0} of the sphere.
Figure 2.1. Distribution of Cube Roots from Table 2.3 Compared with Eqn 2.18
The volume of each of these spherical shells, normalized to the total volume
is also shown in the figure. This normalized volume is proportional to the
distribution given in eqn 2.18. We see that the distribution of cube roots of
the random number string of Table 1.1 matches the density function of eqn
2.18 reasonably well considering that only one hundred random numbers of
two significant digits were used.
A simple technique for selecting points uniformly within the area under a
curve, and thereby calculating the associated area, is called the rejection
technique. This technique was introduced in Example 1.2.
Consider a pdf f(x) that has a maximum value less than or equal to some number M, and that is zero outside the range x ∈ (a,b); i.e., the function is zero for x ≤ a or x ≥ b, and 0 ≤ f(x) ≤ M for all x. Therefore the area under the curve defined by f(x) is enclosed within the rectangle bounded by 0 ≤ y ≤ M and a < x < b. To use the rejection technique, one first picks a point
(x,y) uniformly within this rectangle. This is accomplished by using two random numbers, ξ₁ and ξ₂, one to select a y value uniformly from within the interval [0,M],

y = M ξ₁   (2.19)

and the other to select an x value uniformly from within the interval (a,b),

x = a + (b − a) ξ₂   (2.20)
The point (x,y) is then examined to determine whether it falls under the curve f(x). If the point is outside the area of interest, y > f(x), it is rejected and another point is chosen. If the point is under the curve, y ≤ f(x), it is accepted and the random variable is set to x.
In general terms, the rejection technique uses two or more random
numbers to select points uniformly in a space S that encloses the desired
space S'. The space S over which the points are selected is chosen to
simplify the calculation of the uniformly distributed random points; i.e., the
sample space is S rather than S'. Therefore only the inverse of the function
defining S is used in the sampling process, the inverse function defining S'
not being required. Any points selected from S that are outside S' are
rejected. Points that are inside S' are accepted. Since the points are selected
from a uniform distribution over the space S, the distribution of the accepted
points is also uniform over the enclosed space S'.
For the rejection technique to be efficient, it is necessary to ensure that
the fraction of points that are rejected is small. This means that the volume
of S should not be much larger than the volume of S', since the ratio of the
volumes S'/S will determine the expected fraction of all the points chosen
that will be accepted. It is also important that the process for choosing the
randomly selected points within S be as simple as possible so that a
minimum amount of time is spent in accepting or rejecting the points.
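The rectangle-based rejection procedure described above can be sketched as follows (a Python illustration rather than the book's Fortran; the function name is our own):

```python
import random

def rejection_sample(f, a, b, M, rng=random.random):
    # Pick (x, y) uniformly in the bounding rectangle [a,b] x [0,M]
    # (eqns 2.19 and 2.20); accept x when the point falls under the curve.
    while True:
        y = M * rng()              # eqn 2.19
        x = a + (b - a) * rng()    # eqn 2.20
        if y <= f(x):              # under the curve: accept
            return x
        # otherwise reject the point and pick another
```

For example, `rejection_sample(lambda t: 2.0 * t, 0.0, 1.0, 2.0)` draws from the pdf f(x) = 2x on (0,1); here half of all candidate points are rejected, matching the area ratio S′/S = 1/2 discussed above.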
Given a discrete set of numbers X = {x₁, x₂, ..., xₙ}, the expected value
of X, also referred to as the mean value of X, is given by

E(X) = X̄ = (1/n) Σᵢ₌₁ⁿ xᵢ    (2.21)

and the variance of the set is

var(X) = σ² = (1/n) Σᵢ₌₁ⁿ (xᵢ - X̄)²    (2.22)

Finally, the standard deviation, σ, of the set of numbers is the positive
square root of the variance.
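Eqns 2.21 and 2.22 translate directly into code (a Python sketch; the book itself works in Fortran):

```python
def mean(xs):
    # eqn 2.21: arithmetic mean of the set
    return sum(xs) / len(xs)

def variance(xs):
    # eqn 2.22: population variance (divide by n, not n - 1)
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)
```

Applied to the population {1, 3, 5, 15} used later in this chapter, these return a mean of 6 and a variance of 29.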
In analogy with the above, the expected value or mean value of a
continuous random variable V specified by the pdf f(x) is the weighted
average of the variable over its density function,

E(V) = ∫₋∞⁺∞ V(x) f(x) dx

and the variance is the weighted average of the squared deviation from the
mean,

var(V) = ∫₋∞⁺∞ [V(x) - E(V)]² f(x) dx

Expanding the square in the variance integral gives

var(V) = E(V²) - [E(V)]²    (2.29)
The expression for the variance given by eqn 2.29 is useful for
calculating or estimating the variance of a random variable. It is valid for
both discrete and continuous random variables. However, the expected
value of a random variable may not exist, or may not be finite. If the
expected value is not finite, the variance does not exist. If the expected
value is finite, the variance exists but may not be finite. We will see
examples of the latter in Chapter 7.
The above expressions can be used, in some cases, to calculate the means
and variances of random variables analytically. Analytical calculations are
useful in examining various variance reduction techniques. To illustrate
this, consider Example 1.3. In that example we used Monte Carlo to
estimate the value of the definite integral
I = ∫₀⁵ 1/(1 + x²) dx    (2.30)

Using the notation developed in this chapter, let us define the function

f(x) = 1/5 for 0 ≤ x ≤ 5;  0 otherwise    (2.31)
∫₋∞⁺∞ f(x) dx = 1    (2.32)

Y(x) = 5/(1 + x²)    (2.33)

E(Y) = ∫₋∞⁺∞ Y(x) f(x) dx = ∫₀⁵ 1/(1 + x²) dx    (2.34)

E(Y²) = ∫₋∞⁺∞ Y²(x) f(x) dx = ∫₀⁵ 5/(1 + x²)² dx    (2.36)
S̄ = (1/n) Σᵢ₌₁ⁿ vᵢ    (2.39)

var(S) = s₁² = (1/n) Σᵢ₌₁ⁿ (vᵢ - S̄)²    (2.40)

s₂² = [1/(n - 1)] Σᵢ₌₁ⁿ (vᵢ - S̄)²    (2.41)

If the true mean of V is known, μ = V̄, then a third estimate of the variance
of V is

s₃² = (1/n) Σᵢ₌₁ⁿ (vᵢ - μ)²    (2.42)
Only eqn 2.40 gives the variance of the sample S although all three
equations (2.40, 2.41, and 2.42) provide estimates of the variance of V, the
random variable from which the sample is taken. Eqns 2.41 and 2.42
provide unbiased estimates of the variance of V. However, none of the
s² = (1/n) Σᵢ₌₁ⁿ vᵢ² - [(1/n) Σᵢ₌₁ⁿ vᵢ]²    (2.43)

var(S̄) = (1/n) var(V) = σ²/n    (2.44)
For the population V = {1, 3, 5, 15} the mean is
6, the variance is σ² = 29, and the standard deviation is σ ≈ 5.3852. The first
column of Table 2.4 provides the set of all samples of two numbers taken
from V. A typical Monte Carlo sampling scheme does not remove the
numbers selected during the sampling process, and thus values previously
selected are available for subsequent selection. Hence a number can be
repeated in a single sample. This is referred to as sampling with
replacement.
Table 2.4. Samples of Two Numbers from V = {1, 3, 5, 15}

Sample     mᵢ     s₁²    s₂²    s₃²    s₁      s₂      s₃
{1,1}      1      0      0      25     0       0       5
{1,3}      2      1      2      17     1       1.4142  4.1231
{1,5}      3      4      8      13     2       2.8284  3.6055
{1,15}     8      49     98     53     7       9.8994  7.2801
{3,1}      2      1      2      17     1       1.4142  4.1231
{3,3}      3      0      0      9      0       0       3
{3,5}      4      1      2      5      1       1.4142  2.2360
{3,15}     9      36     72     45     6       8.4852  6.7082
{5,1}      3      4      8      13     2       2.8284  3.6055
{5,3}      4      1      2      5      1       1.4142  2.2360
{5,5}      5      0      0      1      0       0       1
{5,15}     10     25     50     41     5       7.0710  6.4031
{15,1}     8      49     98     53     7       9.8994  7.2801
{15,3}     9      36     72     45     6       8.4852  6.7082
{15,5}     10     25     50     41     5       7.0710  6.4031
{15,15}    15     0      0      81     0       0       9
Mean       6      14.5   29     29     2.75    3.8891  4.9195
The expected values of s₂² and s₃², shown in the
4th and 5th columns, are equal to the true variance of the population.
Regarding the expected values for the standard deviations, none of the
equations used to provide estimates for the variance provides a correct
estimate of the standard deviation.
The variances of the estimates of the means (i.e., the variance of the mᵢ
shown in the second column of Table 2.4, where i indicates the row number
in the table) can be calculated by

var(mᵢ) = (1/16) Σᵢ₌₁¹⁶ (mᵢ - m̄)² = (1/16) Σᵢ₌₁¹⁶ (mᵢ - 6)²    (2.46)

The result, using the mᵢ from the table, is var(mᵢ) = 14.5. Since each
individual mean in the example is calculated by taking two samples from the
total population V, from eqn 2.44 the variance of the mean is expected to be

var(S̄) = (1/n) var(V) = σ²/n = 29/2 = 14.5    (2.47)
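The agreement between eqns 2.46 and 2.47 can be checked by enumerating all sixteen two-number samples drawn with replacement from the population (a Python sketch; the variable names are our own):

```python
from itertools import product

population = [1, 3, 5, 15]
# all ordered pairs, i.e. sampling with replacement as in Table 2.4
means = [(a + b) / 2 for a, b in product(population, repeat=2)]
mu = sum(population) / len(population)                 # true mean, 6
# eqn 2.46: variance of the sixteen sample means about the true mean
var_of_means = sum((m - mu) ** 2 for m in means) / len(means)
# var_of_means equals sigma^2 / n = 29 / 2 = 14.5, matching eqn 2.47
```

The enumeration reproduces the 14.5 quoted in the text exactly, because every possible sample is included rather than a random selection of samples.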
Y = [1/(σ√(2π))] e^(-(x-μ)²/(2σ²))    (2.48)

where μ is the mean and σ is the standard deviation of the distribution. Here
Y is the probability density for finding a sample value from a normally
distributed population at a given distance from the mean; integrating Y over
an interval gives the probability of falling within that interval. This means
that approximately 68.27% of
the time the estimate of the mean will be within one standard deviation of
the correct value, 95.45% of the time the estimate will be within two
standard deviations of the correct value, and 99.73% of the time the estimate
will be within three standard deviations of the correct value.
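The confidence levels quoted above follow from integrating eqn 2.48, which can be expressed through the error function (a Python sketch using the standard-library `math.erf`; the function name is our own):

```python
from math import erf, sqrt

def within_k_sigma(k):
    # Probability that a normally distributed sample lies within k standard
    # deviations of the mean: the integral of eqn 2.48 from mu - k*sigma
    # to mu + k*sigma, which reduces to erf(k / sqrt(2)).
    return erf(k / sqrt(2.0))
```

Evaluating `within_k_sigma` at k = 1, 2, 3 recovers 0.6827, 0.9545, and 0.9973 to four decimal places.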
I = ∫₀⁵ 1/(1 + x²) dx    (2.30)

From the results of Section 2.3, we know that defining the pdf

f(x) = 1/5 for 0 ≤ x ≤ 5;  0 otherwise    (2.31)

and defining a random variable V with this pdf gives an expected value of V
equal to the desired integral:

E(V) = ∫₋∞⁺∞ V(x) f(x) dx = ∫₀⁵ 1/(1 + x²) dx    (2.34)
Table 2.6. Results of Monte Carlo Calculation of Integral in Eqn 2.30 - File 'Iout.txt'
no of samples= 1000000 Lower Limit= 0.000000 Upper Limit= 5.000000
Samples       avg f         stdev of f    stdev of avg
10000, 1.36775816, 1.42216040, 0.01422160
100000, 1.37200967, 1.42437706, 0.00450428
200000, 1.37222952, 1.42399500, 0.00318415
300000, 1.37090677, 1.42269016, 0.00259746
400000, 1.37552614, 1.42602732, 0.00225475
500000, 1.37508446, 1.42528822, 0.00201566
600000, 1.37417428, 1.42447134, 0.00183898
700000, 1.37376240, 1.42457564, 0.00170269
800000, 1.37344235, 1.42427806, 0.00159239
900000, 1.37341085, 1.42379958, 0.00150082
1000000, 1.37270580, 1.42347290, 0.00142347
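The calculation behind Table 2.6 can be sketched as follows (the book's program is Fortran; this Python version, with our own function name and seed, follows eqns 2.31, 2.33, 2.43, and 2.44):

```python
import random
from math import sqrt

def mc_integral(n, seed=12345):
    # Sample x uniformly on (0,5) per the pdf of eqn 2.31 and average
    # Y(x) = 5/(1 + x^2), eqn 2.33; the sample mean estimates eqn 2.30.
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n):
        x = 5.0 * rng.random()
        y = 5.0 / (1.0 + x * x)
        total += y
        total_sq += y * y
    avg = total / n
    var = total_sq / n - avg * avg       # eqn 2.43
    return avg, sqrt(var / n)            # estimate and its std dev, eqn 2.44
```

The exact answer is arctan(5) ≈ 1.3734, and the estimate converges toward it at the 1/√n rate shown in the last column of the table.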
From the expression for the variance,
it can be seen that, if all of the samples from a population are equal to the
mean value of the population, the variance is zero and each estimate is equal
to the mean. Even if the estimates cannot be made equal to the mean, the
closer they are to the mean the smaller the variance will be. The only way
the estimates can be made close to the mean is to change the distribution
from which the samples are taken. Of course, to be of practical use the
expected value of the random variable in the new problem must be the same
as the expected value of that in the original problem. It is not necessary,
however, to retain the original probability distributions. In fact, these must
generally be changed. For the integral of eqn 2.30, split the range of
integration into the two strata (0,1) and (1,5),

I = ∫₀¹ 1/(1 + x²) dx + ∫₁⁵ 1/(1 + x²) dx    (2.50)

This gives, for the first stratum,

f₁(x) = 1 for 0 ≤ x ≤ 1;  0 elsewhere    (2.53)

V₁(x) = 1/(1 + x²)    (2.54)

and, for the second stratum,

f₂(x) = 1/4 for 1 ≤ x ≤ 5;  0 elsewhere    (2.55)

V₂(x) = 4/(1 + x²)    (2.56)
The normalization is selected such that f₁ and f₂ are pdfs and the product
fᵢ(x)Vᵢ(x), for i = 1, 2, is everywhere equal to the desired integrand. Let us
denote the expected value of V in the first stratum as E(V₁), and the
expected value of V in the second stratum as E(V₂). Then
E(V₁) = ∫₀¹ 1/(1 + x²) dx ≈ 0.7853982    (2.57)

E(V₁²) = ∫₀¹ [1/(1 + x²)]² dx ≈ 0.6426991    (2.58)

var(V₁) = E(V₁²) - [E(V₁)]² ≈ 0.0258488    (2.59)

E(V₂) = ∫₁⁵ [4/(1 + x²)] (1/4) dx ≈ 0.5880026    (2.60)

E(V₂²) = ∫₁⁵ [4/(1 + x²)]² (1/4) dx ≈ 0.5606200    (2.61)

var(V₂) = E(V₂²) - [E(V₂)]² ≈ 0.2148729    (2.62)

The value of the original integral is then E(V₁) + E(V₂), and the variance of
this value is var(V) = var(V₁) + var(V₂). Thus the standard deviation is

√[var(V₁) + var(V₂)] ≈ √0.2407 ≈ 0.491    (2.65)
Comparing eqn 2.38 with eqn 2.65 shows that the use of these two strata
reduces the standard deviation of the random variable associated with the
integral from about 1.424 to 0.491, or by a factor of approximately 2.9.
Since the total number of samples is split between the two strata, the
improvement obtained using the same total number of samples is 2.9 divided
by the square root of two, or 2.05. This reduces the number of samples
needed to obtain the result to a given level of accuracy by a factor of (2.05)2,
or 4.2. A similar calculation using four strata with the ranges (0,0.5),
(0.5,1), (1,2) and (2,5), gives an expected standard deviation of 0.1646. The
value of stratified sampling is therefore potentially large in this problem.
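The two-stratum estimate of eqn 2.50 can be sketched as follows (a Python illustration with our own function name and seed; samples are split equally between the strata):

```python
import random

def stratified_estimate(n, seed=1):
    # Split eqn 2.30 into the strata (0,1) and (1,5); within each stratum
    # sample x uniformly and average (width)/(1 + x^2), which is V_i(x)
    # of eqns 2.54 and 2.56 under the uniform stratum pdf.
    rng = random.Random(seed)
    strata = [(0.0, 1.0), (1.0, 5.0)]
    total = 0.0
    for a, b in strata:
        m = n // len(strata)               # equal split of the n samples
        s = 0.0
        for _ in range(m):
            x = a + (b - a) * rng.random()
            s += (b - a) / (1.0 + x * x)
        total += s / m                     # stratum contribution E(V_i)
    return total
```

The sum of the two stratum averages estimates E(V₁) + E(V₂) = arctan(5), with the reduced spread discussed above.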
For many random variables of interest, the range over which a Monte
Carlo estimate varies within a stratum can be less than that for the total
sample space. Since the variance of the variable depends upon this range,
and reducing the range reduces the variance, stratification generally reduces
the variance. In fact, provided the sampling is distributed pro rata among
the strata (the number of samples in each stratum is proportional to the
"size" of the stratum), in no case will stratified sampling reduce the
accuracy of a Monte Carlo estimate. Furthermore, unless the function being
sampled is a constant, or is cyclical with the stratification selected such that
each stratum contains an integral number of cycles, dividing Monte Carlo
sampling into strata and distributing the sampling pro rata or better - i.e.,
using certain biasing in the distribution of samples per stratum, to be
discussed later - will always reduce the variance of an estimate made from a
given number of samples. This means that it is difficult to reduce the
accuracy, and easy to improve the accuracy, of Monte Carlo results by using
stratified sampling.
a = ∫_A f(x) dx    (2.66)

Then

1 - a = ∫_{R-A} f(x) dx    (2.67)

Define

f₁(x) = f(x)/a for x in A;  0 otherwise    (2.68)

and

f₂(x) = f(x)/(1 - a) for x in R-A;  0 otherwise    (2.69)
Let V₁(x) and V₂(x) be random variables with pdfs f₁(x) and f₂(x),
respectively, but with the same values as V(x) for each x in R; i.e.,
† For the purpose of this proof, a measurable set A of real numbers is any set that is a union of
intervals of real numbers.
(2.70)

Then, if E(V₁) = E(V) + δ,

E(V₂) = E(V) - aδ/(1 - a)    (2.74)

Therefore, if δ > 0, i.e., E(V₁) > E(V), then E(V₂) < E(V), and vice versa.
The variance of V is given by

var(V) = ∫_R [V(x) - E(V)]² f(x) dx    (2.75)

= ∫_A [V(x) - E(V)]² f(x) dx + ∫_{R-A} [V(x) - E(V)]² f(x) dx    (2.76)

In terms of the stratum pdfs, the variance of the combination aV₁ + (1-a)V₂
can be written

var(aV₁ + (1-a)V₂) = a ∫_A [V - E(V₁)]² f₁(x) dx
                   + (1-a) ∫_{R-A} [V - E(V₂)]² f₂(x) dx    (2.79)
If E(V) = E(V₁) = E(V₂) then, comparing eqn 2.76 with eqn 2.80, gives
var(V) = var(aV₁ + (1-a)V₂). For the case where E(V₁) does not equal E(V),
which is the assumption of the theorem, consider a function g(m) defined by
Similarly, if E(V₂) does not equal E(V) (and this follows if E(V₁) does not
equal E(V)) then
QED
(2.86)
made with n total samples, with m samples used for the first stratum and
n-m samples used for the second stratum. This gives
w(m) = (a/m) var(V₁) + [(1-a)/(n-m)] var(V₂)    (2.87)

Setting the derivative of w with respect to m equal to zero, and solving for m
such that 0 ≤ m ≤ n, gives

m = n/(1 + √k)    (2.88)

where

k = (1-a) var(V₂) / [a var(V₁)]    (2.89)
That is, there exists an optimum partitioning of the n samples among the
strata. This optimization depends on the variances of the random variables
associated with the strata. Biasing stratified sampling schemes can further
reduce the variance of estimates of the means of random variables.
Optimum partitioning of a problem space for stratified sampling with Monte
Carlo is discussed in several texts. 6
As was noted earlier, the pdf of eqn 2.31 and the resulting random
variable of eqn 2.33 are not the only pdf and random variable that can be
defined that have an expected value equal to the value of the integral in eqn
2.30. One may analyze eqn 2.30 and determine alternative definitions of the
pdf and random variable such that the problem is not only solved, but the
variance of the result is reduced.
In the general case of biased sampling (not just biased per a stratification
scheme), the samples are picked from a modified pdf. Consider the random
variable Vex) with pdff(x). The expected value of V, E(V), is then given by
E(V) = ∫_R V(x) f(x) dx    (2.90)

Given a different pdf g(x), such that g(x) > 0 everywhere that V(x)f(x) > 0,
we have

E(V) = ∫_R [V(x) f(x)/g(x)] g(x) dx

Defining V'(x) = V(x)f(x)/g(x), a random variable sampled from the pdf
g(x), we see that E(V') = E(V). However, it is not necessarily the case that the variance
of V' is the same as the variance of V. The proper selection of g(x) can
result in the variance being significantly reduced.
To illustrate the potential for reducing the variance of the estimate of the
expected value of V(x), assume that g(x) can be found such that the ratio
V(x)f(x)/g(x) is nearly constant. As an example, consider the integral
∫₀¹ eˣ dx = e - 1 ≈ 1.71828, for which

f(x) = 1 for x in (0,1);  0 elsewhere    (2.96)

and V(x) = eˣ.
Consider a Monte Carlo estimate of this integral. The results of using 10⁶
samples to estimate the integral without biasing are
- Value of Integral = 1.71825757
- Standard Deviation of Random Variable = 0.49168896
- Standard Deviation of Estimate of Integral = 0.00049169
In order to select a possible pdf for biasing, we note that a straight line
from (0,1) to (1,2.718) has somewhat the same shape as the function we
wish to evaluate. Using this line, with the appropriate normalization, to
define a modified pdf g(x) proportional to 1 + 1.718x gives the biased scheme.
The Monte Carlo results for this biased sampling scheme are
- Value of Integral = 1.71823566
- Standard Deviation of Random Variable = 0.06507303
- Standard Deviation of Estimate of Integral = 0.00006507

The biased result is more reliable than the unbiased result. The modified pdf
chosen here is not an optimal scheme in any normal sense of the word. It
was chosen because it was easy to define and easy to implement. Yet even a
simple scheme such as this produces a significant reduction in the variance
of the result.
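The straight-line biasing scheme can be sketched as follows (a Python illustration; the function name, seed, and the closed-form CDF inversion of the linear pdf are our own, with g(x) taken proportional to 1 + 1.718x as described above):

```python
import random
from math import exp, sqrt

def biased_estimate(n, seed=7):
    # Estimate the integral of e^x on (0,1) by sampling from the linear
    # pdf g(x) = (1 + c*x)/norm and weighting each sample by f(x)/g(x).
    rng = random.Random(seed)
    c = 1.718
    norm = 1.0 + c / 2.0                     # integral of 1 + c*x over (0,1)
    total = 0.0
    for _ in range(n):
        # invert the CDF of g: (x + c*x^2/2)/norm = xi
        xi = rng.random()
        x = (sqrt(1.0 + 2.0 * c * norm * xi) - 1.0) / c
        g = (1.0 + c * x) / norm
        total += exp(x) / g                  # V(x) f(x) / g(x), with f = 1
    return total / n
```

Because eˣ/g(x) is nearly constant over (0,1), the weighted samples scatter far less than raw samples of eˣ, which is the variance reduction seen in the results above.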
Exercises
1. Using eqns 2.8 and 2.9 derive a sampling scheme for selecting the radii of
points uniformly distributed over
a. a disk of radius r₀
b. an annulus of inner radius rᵢ and outer radius r₀.
c. a spherical shell of inner radius rᵢ and outer radius r₀.
1 See, for example, L. Lyons, Statistics for Nuclear and Particle Physicists, Cambridge
University Press, Cambridge, 1986. A discussion of random variables is given in Chapter 2
of J. Honerkamp, Statistical Physics, Springer-Verlag, New York, 1998.
2 In the early days of Monte Carlo it was time consuming to evaluate a logarithm on a
computing machine and techniques were developed to permit sampling from the exponential
distribution without such an evaluation. For examples see John von Neumann, "Various
Techniques Used in Connection With Random Digits," Monte Carlo Method, A. S.
Householder, G. E. Forsythe, and H. H. Germond, eds., National Bureau of Standards
Applied Mathematics Series 12, U. S. Government Printing Office, Washington, D.C.,
1951, p. 38; and E. D. Cashwell and C. J. Everett, A Practical Manual on the Monte Carlo
Method for Random Walk Problems, Pergamon Press, New York, 1959, pp. 119-20.
3 Robert B. Ash, Real Analysis and Probability, Academic Press, New York, 1972, pp. 321 ff.
4 Lyons, op. cit., pp. 13ff.
5 This theorem is similar to the "parallel axis theorem" in physics. See Numerical Recipes in
Fortran 77: The Art of Scientific Computing, Cambridge University Press, 1986-1992,
Chapter 7, p. 308.
6 See Reuven Y. Rubinstein, Simulation and The Monte Carlo Method, John Wiley and Sons,
New York, 1981, p. 133.
Chapter 3
Monte Carlo Modeling of Neutron Transport
3.1 Introduction
Σ = Nσ    (3.1)

(3.2)

dn/dx = -Σₜ n    (3.3)
λ = [∫₀^∞ x e^(-Σₜx) dx] / [∫₀^∞ e^(-Σₜx) dx] = 1/Σₜ    (3.5)

The quantity λ, the mean distance a particle with a total interaction cross
section Σₜ travels between collisions, is called the mean free path (mfp).
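Sampling flight paths from the exponential attenuation law, and confirming that their average reproduces the mean free path of eqn 3.5, can be sketched as follows (a Python illustration; the book's programs use Fortran):

```python
import random
from math import log

def sample_flight_path(sigma_t, rng):
    # The number of particles surviving a distance x falls as
    # exp(-sigma_t * x) (eqn 3.3), so inverting the CDF gives
    # x = -ln(xi)/sigma_t; 1 - xi is used so the argument is never zero.
    return -log(1.0 - rng.random()) / sigma_t

rng = random.Random(3)
paths = [sample_flight_path(1.0, rng) for _ in range(100000)]
# the sample mean approaches the mean free path 1/sigma_t = 1 mfp, eqn 3.5
```

This is the same path-length selection (the form of eqn 2.14) used by every random walk in this chapter.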
Ψ = Nv    (3.6)

Here we distinguish between the particle's vector velocity v and its speed v,

v = vΩ    (3.7)

where Ω is a unit vector in the direction of travel. The scalar flux is the
angular flux integrated over all directions,

φ(r) = ∫_{4π} Ψ(r,Ω) dΩ    (3.8)

Here the integral over 4π refers to integration over the entire solid angle
about the point r. The constant of proportionality relating the neutron
scalar flux to the neutron reaction rate per unit volume, R, is the
macroscopic interaction cross section, Σ, for the reaction of interest,

R = Σφ    (3.9)
(1/v) ∂Ψ(r,Ω,E,t)/∂t + Ω·∇Ψ + Σₜ Ψ - S =
∫∫ Ψ(r,Ω',E',t) Σₛ(r; Ω',E' → Ω,E) dE' dΩ'    (3.11)
where S is a source that does not depend on Ψ. The scattering kernel Σₛ in
eqn 3.11 refers to a change in the neutron coordinates from the primed to
the unprimed values during a collision at r. Using the method of
characteristics, we can obtain an expression for the flux at the point r, in the
direction Ω, at the time t.² We introduce the line through r in the direction
Ω and call the coordinate along this line s. We note that
dΨ/ds = (∂Ψ/∂t)(dt/ds) + (∂Ψ/∂x)(dx/ds) + (∂Ψ/∂y)(dy/ds) + (∂Ψ/∂z)(dz/ds)    (3.12)
(3.13)
Because the right sides of eqns 3.12 and 3.13 are the same, we can equate
the left sides of these two expressions to obtain
(3.14)
(1/v)(∂Ψ/∂t) = (∂Ψ/∂t)(dt/ds)  ⇒  dt/ds = 1/v  ⇒  t = t₀ + s/v

Ωₓ(∂Ψ/∂x) = (∂Ψ/∂x)(dx/ds)  ⇒  dx/ds = Ωₓ  ⇒  x = x₀ + sΩₓ    (3.15)

Ωᵧ(∂Ψ/∂y) = (∂Ψ/∂y)(dy/ds)  ⇒  dy/ds = Ωᵧ  ⇒  y = y₀ + sΩᵧ

Ω_z(∂Ψ/∂z) = (∂Ψ/∂z)(dz/ds)  ⇒  dz/ds = Ω_z  ⇒  z = z₀ + sΩ_z
where x₀, y₀, z₀, and t₀ are arbitrary constants. The latter three equations
can be combined into

r = r₀ + sΩ    (3.16)
where the emission density q is the sum of the external source and the
inscattering rate,

q(r₀+sΩ, Ω, E, t₀+s/v) = S(r₀+sΩ, Ω, E, t₀+s/v)
+ ∫∫ Σₛ(r₀+sΩ; Ω',E' → Ω,E) Ψ(r₀+sΩ, Ω', E', t₀+s/v) dΩ' dE'    (3.17)

Introducing the attenuation factor

e^(-∫_{-∞}^s Σₜ(r₀+s'Ω,E) ds')    (3.18)

and integrating over s from -∞ to 0, assuming that over all phase
space the flux vanishes as s → -∞, gives

Ψ e^(-∫_{-∞}^0 Σₜ(r₀+s'Ω,E) ds') = ∫_{-∞}^0 e^(-∫_{-∞}^s Σₜ(r₀+s'Ω,E) ds') q ds    (3.21)
Using
and the fact that the integral from -00 to 0 in the exponential is not a
function of s gives
Changing signs for both s and s', changing variables so that r₀ = r and t₀ = t,
and expanding q according to the right-hand side of eqn 3.17 gives the
desired form of the Boltzmann transport equation,
Ψ(r,Ω,E,t) = ∫₀^∞ e^(-β) ∫∫ Σₛ(r-sΩ; Ω',E' → Ω,E) Ψ(r-sΩ, Ω', E', t-s/v) dΩ' dE' ds

+ ∫₀^∞ e^(-β) S(r-sΩ, Ω, E, t-s/v) ds    (3.24)

where

β = ∫₀ˢ Σₜ(r-s'Ω,E) ds'    (3.25)
Thus, by eqn 3.24, the angular flux at a point can be expressed as integrals
over all possible sources and neutron flight paths leading to that point and
meeting the appropriate energy, direction, and time restrictions.
In operator notation, eqn 3.24 can be written as

Ψ = Ψ₀ + KΨ    (3.27)

where K is the integral operator of eqn 3.24 and Ψ₀ is the contribution of
the source term. Iterating this equation gives the solution

Ψ = Σₙ₌₀^∞ Ψₙ,  Ψₙ₊₁ = KΨₙ    (3.28)

This solution is called the von Neumann series. Ψ₀ is the angular flux from
the source that arrives at the point in question without undergoing
collisions, Ψ₁ is the angular flux arriving that has undergone exactly one
collision, and so on. The Monte Carlo tracking of source particles that have
not undergone a collision provides an estimate for Ψ₀; the continued
tracking of particles that have undergone one collision provides an estimate
of Ψ₁, etc. The tracking process therefore estimates the von Neumann
series solution to the integral formulation of the transport equation.
1. Problem Definition
a. Define the problem geometry
The physical description of the problem geometry and material constituents
is fundamental to a proper definition of the problem. All particle
interactions, flight paths, and escapes from the problem geometry are based
on this description.
b. Define the source
Even in eigenvalue calculations (discussed in Chapter 8) a source must be
defined for the random walk procedure to be initiated. This source may be
a pdf from which the calculation selects start particles, or may be a specific
set of particle start points, directions, energies, and times. The initial
source is externally applied; i.e., it does not arise from prior interactions of
the neutrons being tracked with the material constituents of the problem.
2. Random Walk
a. Select a Source Particle
A neutron is selected from the source distribution. The neutron is given an
initial position, energy, time, and direction of travel, according to the
problem parameters.
b. Determine a collision point
A collision site for the neutron is selected from the exponential distribution
of collisions along its flight path. The cross sections of the materials
through which the neutron is traveling are used to obtain the probability of
collision per unit path length.
c. Determine the type of interaction
Once a point of interaction is chosen, the total cross section is apportioned
pro rata among the nuclear species present. After selecting a nuclear
species, the cross section for that species is used to determine which type
of interaction has occurred. An alternative technique for handling
interaction cross sections is to average, or "mix," the cross sections so that
one combined set contains the features of all the constituents. A
multigroup formulation is also possible, in which the neutron energies are
cast into a set of discrete bins or groups, and a table of group-averaged
cross sections is used to determine the interactions.
d. Determine the result of the interaction
The result of the interaction is selected from one or more of the following
alternatives: death of the neutron by absorption (or reduction of the
"weight" of the particle by the non-absorption probability); scattering of
the tracked particle through some angle selected from the particular angular
scattering characteristics of the nucleus encountered; or production of
secondary particles, including fission.
e. Complete the history
All secondary particles (or a statistically valid sampling from them), as
well as the scattered neutron, are tracked to determine subsequent collision
points and products. This process is continued until the initial neutron, and
all secondary particles produced by the initial neutron, either die by some
direct or statistical means, or escape from the geometry.
3. Compute the response
Use the result of the random walk to calculate the detector response being
modeled by the calculation. This is most commonly done simultaneously
with the random walk, but may be done by means of a post-random-walk
event file.
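The random-walk steps above can be sketched in compact form for the simplest case treated in this chapter, a point isotropic source in a pure-scattering sphere (the setting of Example 3.1). The book's programs are Fortran; this Python version, with our own function name, seed, and fixed 0.1-mfp bins, is illustrative only:

```python
import random
from math import log, pi, cos, sin, sqrt

def track_histories(nhist, radius=10.0, sigma_t=1.0, seed=1):
    # Random walk: pick a flight path from the exponential distribution,
    # move the particle, score the collision radius, scatter isotropically,
    # and repeat until the particle escapes the sphere.
    rng = random.Random(seed)
    nbins = int(radius / 0.1)
    tally = [0] * nbins
    for _ in range(nhist):
        x = y = z = 0.0
        u, v, w = 0.0, 0.0, 1.0                 # start particles travel along +z
        while True:
            s = -log(1.0 - rng.random()) / sigma_t
            x, y, z = x + s * u, y + s * v, z + s * w
            r = sqrt(x * x + y * y + z * z)
            if r >= radius:
                break                            # escape terminates the track
            tally[min(int(r / 0.1), nbins - 1)] += 1   # score the collision
            w = 2.0 * rng.random() - 1.0         # random polar cosine on (-1,1)
            phi = 2.0 * pi * rng.random()
            sn = sqrt(1.0 - w * w)
            u, v = sn * cos(phi), sn * sin(phi)  # new direction cosines
    return tally
```

With no absorption, escape is the only terminating event, so each history contributes many collisions; the outer bins collect more collisions than the innermost ones because their shell volumes are larger.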
The execution of the steps listed above can entail a time-consuming
calculation. In practical terms a Monte Carlo calculation must be able to
produce statistically meaningful results using relatively few particle tracks
to simulate the average effect of a large population of neutrons. This may
require a high degree of care on the part of the practitioner. In many cases a
careful study of the problem to cast it in appropriate terms for Monte Carlo
analysis can result in considerable savings in computation time. Some of
the techniques available for increasing the efficiency of Monte Carlo
calculations are discussed in Chapter 6.
1. Problem Definition
a. Define the problem geometry
Consider a spherical geometry with a radius of 10 mfp centered around a
point source. For convenience we will assume Σₜ = 1; i.e., all distances
will be measured in units of mfp. For the purpose of obtaining reasonable
spatial resolution of the flux, the geometry description will allow for the
Monte Carlo tally, or scoring, of collisions that take place in shells of
thickness 0.1 mfp centered around the point source.
b. Define the source
Because both the problem geometry and the method used for scoring
collisions are spherically symmetric, the initial direction selected for the
start particles is arbitrary. Therefore we will place the origin of our
coordinate system at the location of the source and assume all start
particles are emitted in the +Z direction. The problem is monoenergetic
so definition of the energy of the source particles is unnecessary.
2. Random Walk
Tables 3.1 and 3.2 show the Fortran code used to estimate the collisions in
the geometry defined above as a function of radius. The steps in the
calculation follow the random walk steps defined above. The radial bin
boundaries could be input by the user, but for this example they are
calculated based on a fixed 0.1-mfp thickness.
3. Compute the response
The number of collisions is tallied in each radial bin.
After requesting the number of particles to be tracked and the start
random number seed, the program tracks the particles through the problem
geometry. The flight path for the first particle is along the +Z axis and a
first collision point is picked using eqn 2.14. Following this first collision
the direction for the second flight path is determined by selecting a random
unit vector. The length of this second flight path is obtained and, by eqn
3.16, the second collision point is determined. Each collision is scored by
testing the radius of the collision point, finding the radial bin that contains
this point, and incrementing a counter for that radial bin.
Calculating a new random vector originating at a point r is equivalent to
finding a random point on the surface of a unit sphere. The differential area
of a conic section of the spherical surface between the polar angles θ and θ
+ dθ is proportional to d(cos θ). Hence the random point is found by first
calculating a random cosine for the polar angle over the range (-1,1), after
which a random azimuthal angle φ is selected over the range (0,2π) (see
Figure 3.1). The coordinate transformation for selecting random points
over a range different from that of the random number generator being used
is given by eqns 2.19 and 2.20.
For the isotropic, post-collision unit vector defined by the angles θ and
φ, the Cartesian direction cosines are obtained from

u = sin θ cos φ,  v = sin θ sin φ,  w = cos θ    (3.29)
Having defined the post-collision direction of travel the program loops back
to repeat the previous steps. Since particle escape is the only means for
terminating a track, because of the assumption of no absorption, the looping
is continued until the particle escapes from the problem geometry. Another
particle is then tracked.
[Figure 3.1: geometry for selecting a random direction; the component
sin θ cos φ is indicated relative to the y axis]
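The isotropic direction selection can be sketched as follows (a Python illustration of the cosine-and-azimuth procedure described above; the function name is our own):

```python
import random
from math import cos, sin, sqrt, pi

def random_direction(rng=random.random):
    # Pick cos(theta) uniformly on (-1,1) and phi uniformly on (0, 2*pi);
    # the Cartesian direction cosines are then
    #   u = sin(theta)cos(phi), v = sin(theta)sin(phi), w = cos(theta)
    w = 2.0 * rng() - 1.0
    phi = 2.0 * pi * rng()
    s = sqrt(1.0 - w * w)
    return s * cos(phi), s * sin(phi), w
```

The returned triple is a unit vector, and over many samples each component averages to zero, as an isotropic distribution requires.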
From eqn 3.9, the scalar flux φ (cm⁻² sec⁻¹) is related to the collision rate
density R (cm⁻³ sec⁻¹) by

φᵢ = k nᵢ / Σₜ    (3.30)

where φᵢ is the average flux in bin i and nᵢ is the average number of
collisions in bin i per start particle. The normalization constant k is equal to
the reciprocal of the volume of the region over which the collisions are
summed. This normalization converts the number of collisions per second
in the bin into a collision rate density.
ᵃ Unless otherwise indicated, the start random number used for examples in this text that use
'fltrn' will be unity, 'seed' = 1. Uncertainties are not shown in Table 3.3. They could be
calculated using eqns 2.43 and 2.44, as will be shown in Chapter 5.
[Figure: collisions per radial bin (vertical axis, up to about 9000) versus
radius in mfp (0 to 10)]
Figure 3.2. Collisions Versus Radius for Example 3.1
Because the macroscopic cross section assumed for this problem is unity
(1 cm⁻¹), the flux is equal to the collision rate density. Thus we can plot the
flux as a function of radius by dividing the number of collisions in a bin per
start particle by the volume of the bin. The result is shown in Figure 3.3. In
this figure a depression in the flux caused by the vacuum boundary
condition at r = 10 is apparent.
When possible it is desirable to compare Monte Carlo results with
analytic results. This is straightforward for the current example. We know⁴
that for small r,

φ ≈ S/(4πr²)    (3.31)

where in this example the source S = 1. This is the equation for the flux as
a function of distance from a point source for particles streaming in a
vacuum. In the example problem, at locations near the source, collisions
will be negligible and this approximation should hold. Thus for small r, φ ≈
0.08/r². Using the Monte Carlo results from Table 3.3 with eqn 3.30, the
average flux in the first bin, 0 ≤ r ≤ 0.1, is
[Figure 3.3: flux versus radius, log scale from 1.E-2 to 1.E+2]
where V₁ is the volume of the first tally bin. Although the results of eqns
3.32 and 3.33 are in fair agreement, eqn 3.31 underestimates the flux even
in the first space point because the streaming approximation neglects
scattering events. Scattering events can allow particles to pass through a
region more than once, thereby increasing the flux.
An estimate of the flux at a significant distance from a point source can
be obtained from diffusion theory. Diffusion theory results are useful
because they can often be expressed analytically.⁵ However, the results
obtained from diffusion theory are approximate. The diffusion theory
solution for the flux at radius r from a point source at the center of a
uniform sphere with pure scattering is

φ(r) = (3S/4π)(1/r - 1/R)    (3.34)

where R is the outer radius of the sphere. In order to increase the accuracy
of the diffusion solution, the outer radius of the sphere can be adjusted by
an extrapolation distance to compensate for the fact that diffusion theory
does not have an exact equivalence for a vacuum boundary condition. We
can do this by setting R to the actual outer radius plus 0.71 mfp.⁶
In addition to its limitations at the outer boundary, diffusion theory also
provides inaccurate results in the vicinity of a source. Comparing the
diffusion theory result of eqn 3.34 with eqn 3.31, the latter of which is a
good approximation very close to the point source, it can be seen that the
diffusion equation gives a 1/r dependence where the better result has a 1/r²
dependence.
A "hybrid" diffusion theory solution can be derived by using the first
collision events from the point source as the source for the diffusion theory
problem.7 The total flux is then obtained by adding the uncollided flux to
the result obtained from diffusion theory using the first collision source.
This hybrid result is
Ei(-R)]} (3.35)
where E₁ is the exponential integral,

E₁(x) = ∫ₓ^∞ (e⁻ᵘ/u) du = -Ei(-x)    (3.36)
and R is the radius of the sphere plus the extrapolation distance. Figure 3.4
shows the results of eqns 3.34 and 3.35 compared with the Monte Carlo
results for r < 4. The figure shows that the hybrid diffusion results are in
reasonable agreement with the Monte Carlo results at small radii, where the
pure diffusion results are low, and that all three results agree for r greater
than about two mfp. Based on this comparison it appears that the Monte
Carlo results are accurate and, further, that the variances are likely to be
small.
[Figure 3.4: flux (log scale, 1.E-3 to 1.E+2) versus radius in mfp, 0 to 4]
(3.37)
Francois¹⁰ has published the same result and eqn 3.37 uses his notation.
Monte Carlo techniques can be used to verify this result by modeling a
pure absorber. For this problem, the location of start particles uniformly
distributed over a spherical volume is taken from Example 2.1. Eqn 2.18
provides the estimator for selecting a radius r in a spherical volume. The
direction in which a gamma ray is emitted is random, as in the laboratory
isotropic scatter model in Example 3.1. Thus we will take pieces of the
coding we have developed in the past examples and build a specialized
program to determine J/J₀.
In order to determine the number of gamma rays that escape from the
source volume without experiencing a collision we need to tally all random
walks whose initial flight path would produce a collision point outside the
source volume. For a source gamma born with random initial direction at
position r o, and flight path determined by eqn 2.14, we can easily
accomplish this by finding the pseudo-collision point r from eqn 3.16. If
the radius of the sphere R is less than Irl the particle escapes without
suffering a collision. Otherwise the particle has an interaction within the
source material.
particles, and the start random number seed. Using 10⁶ start particles for
selected values of the source radius, the probabilities of source particles
escaping are shown in Table 3.5. Shown for comparison are the results
from eqn 3.37. The results are in excellent agreement.
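The escape calculation described above can be sketched compactly (a Python illustration of the Example 3.2 procedure; the book's program is Fortran, and the function name and seed here are our own):

```python
import random
from math import log

def escape_fraction(radius, nhist, seed=1):
    # Start gammas uniformly in a sphere of the given radius (in mfp),
    # give each a random direction and an exponential flight path, and
    # count those whose pseudo-collision point (eqn 3.16) lies outside.
    rng = random.Random(seed)
    escapes = 0
    for _ in range(nhist):
        r0 = radius * rng.random() ** (1.0 / 3.0)   # eqn 2.18 radius selection
        w = 2.0 * rng.random() - 1.0                # cosine of emission angle
        s = -log(1.0 - rng.random())                # flight path in mfp
        # By symmetry the start point can be put on the z axis at (0,0,r0);
        # the collision-point radius squared is then r0^2 + s^2 + 2*r0*s*w.
        r2 = r0 * r0 + s * s + 2.0 * r0 * s * w
        if r2 > radius * radius:
            escapes += 1                            # first collision outside
    return escapes / nhist
```

Placing the start point on the z axis exploits the spherical symmetry noted for Example 3.1, so the azimuthal angle drops out of the escape test entirely.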
where N(0) is the number of incident neutrons. However, the effect of the
scattering layer in changing the direction of the neutrons, and the possibility
that some of the scattered neutrons will eventually pass through the slab, is
not accounted for in this uncollided estimate.
A Fortran program that tracks particles within this two-layer slab and
scores those passing through the slab is shown in Tables 3.6 - 3.8. The
particle tracks can be terminated only by a particle passing through the slab,
experiencing an absorption in the second layer, or being backscattered out
of the first layer. The program keeps track of the fate of all particles -
those reflected, absorbed, or transmitted - in order to tally all of the input
particles and verify the particle balance.
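The random walk just described can be sketched in a few lines. The Python below is not the book's program: the layer thicknesses and cross sections are illustrative assumptions (one mean free path each, a pure scatterer followed by a pure absorber), and the interface is handled by discarding the remaining flight path and resampling there, which the memoryless argument of Section 3.6 justifies.

```python
import math
import random

def two_layer_slab(n, t1=1.0, t2=1.0, sig1=1.0, sig2=1.0, seed=1):
    """Fates of a parallel beam normally incident on a two-layer slab:
    layer 1 (0 < z < t1) scatters isotropically with total cross section sig1,
    layer 2 (t1 < z < t1 + t2) is a pure absorber with cross section sig2.
    The parameters are illustrative, not those of Example 3.3."""
    rng = random.Random(seed)
    nr = nl = na = 0   # transmitted (right), reflected (left), absorbed
    for _ in range(n):
        z, w = 0.0, 1.0            # position and Z-direction cosine
        while True:
            in_layer1 = z < t1
            sig = sig1 if in_layer1 else sig2
            s = -math.log(rng.random()) / sig   # flight path, eqn 2.14
            znew = z + w * s
            if in_layer1 and w > 0.0 and znew > t1:
                z = t1             # crossed the interface: resample there
                continue
            if znew < 0.0:
                nl += 1            # escaped out the front face
                break
            if znew > t1 + t2:
                nr += 1            # passed through the slab
                break
            z = znew
            if z >= t1:
                na += 1            # any collision in the absorber kills it
                break
            w = 2.0 * rng.random() - 1.0   # isotropic scatter in layer 1
    return nr, nl, na

nr, nl, na = two_layer_slab(20_000)
```

Every particle ends in exactly one tally, so nr + nl + na equals the number of source particles; this is the particle balance the program of Tables 3.6-3.8 verifies.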
Table 3.6. Fortran Program for Parallel Beam on a Two-Layer Slab - Example 3.3
! NR = number of particles passing thru slab (to right)
! NL = number of particles reflected from slab (to left)
! NA = number of particles absorbed in the slab
The decay of a radioactive sample of N atoms is governed by

dN/dt = −λN   (3.40)

for which the solution, for a sample containing N₀ atoms at time t₀, is

N(t) = N₀ e^(−λ(t − t₀))   (3.41)
This is valid for any N₀ and t₀ one wishes to choose, independent of the age of the sample and the number of atoms that have already decayed. The mean time to decay of the population N₀ at time t₀ is

t̄ = t₀ + 1/λ   (3.42)

which is independent of the time t₀. That is, no matter how long a radioactive nucleus has existed in the past, its mean time to decay is always 1/λ from the present time.
In direct analogy, the mean distance a neutron will travel to its next
collision is always its mean free path, independent of the distance it has
traveled since its last collision. Therefore, when tracking a particle in
Monte Carlo, at any point along the particle's trajectory we can discard the
flight path already traveled and select a new flight path for the particle from
its current position. We can do this as often as we wish without changing
the answer to the problem. Obviously the new flight path will not be the
same as the previous one, but the average number of mean free paths
between collisions will remain the same.
The flight path between collisions is selected as before,

z = −(1/Σt) ln ξ   (3.43)

To demonstrate this we will calculate the mean distance to a collision in two ways: directly from eqn 3.43, and by inserting artificial surfaces at intervals of 0.1/Σt along the particle's path. In the second method, as the particle passes through each surface we discard the old flight path and select a new one. For this case we must determine the total distance to a collision by adding the piecewise segments until a new flight path less than 0.1/Σt is selected. When this happens the collision occurs before the next artificial surface is reached and hence the flight path is terminated.
A Fortran program that calculates the mean distance to a collision using these two methods is shown in Table 3.9. A unit macroscopic cross section is assumed. In the first part of the program values of z are selected according to eqn 3.43, and are averaged to obtain an estimate for the mean free path λ. In the second part of the program values of z are selected by summing segments of 0.1/Σt until a ξ value greater than e^(−0.1) ≈ 0.9048 is selected. When this occurs, a flight path less than 0.1/Σt has been found. If n random numbers less than 0.9048 were generated before selecting a ξ greater than this amount, the total flight path, in mean free paths, is then λ = 0.1n + (−ln ξ)/Σt.
The result of running the program in Table 3.9 for various numbers of
samples, and for Σt = 1, is shown in Table 3.10, starting with a seed of one
for the first run. Even using only 1000 samples the two techniques give
results very close to unity. In the limit of a large number of flight paths,
assuming a good quality random number generator is used, the two results
will converge on the exact answer within the limits of computer word
length.
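The two methods of Table 3.9 can be sketched as follows (Python rather than the book's Fortran; function names are ours). The "direct" estimate averages flight paths drawn from eqn 3.43; the "indirect" estimate rebuilds each path from 0.1-mfp segments, discarding the old path at every artificial surface exactly as described above.

```python
import math
import random

SIGT = 1.0            # unit macroscopic cross section, as in Table 3.9
STEP = 0.1 / SIGT     # spacing of the artificial surfaces
CUT = math.exp(-0.1)  # a xi above this gives a flight path shorter than STEP

def direct(n, rng):
    """Average of flight paths sampled directly from eqn 3.43."""
    return sum(-math.log(rng.random()) / SIGT for _ in range(n)) / n

def indirect(n, rng):
    """Average flight path built from piecewise segments: at each artificial
    surface the old path is discarded and a new one selected."""
    total = 0.0
    for _ in range(n):
        z = 0.0
        while True:
            xi = rng.random()
            if xi > CUT:                 # collision before the next surface
                z += -math.log(xi) / SIGT
                break
            z += STEP                    # reached the surface; start over
        total += z
    return total / n

rng = random.Random(1)
d = direct(100_000, rng)
i = indirect(100_000, rng)
```

Both averages converge on one mean free path, mirroring the behavior in Table 3.10.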
Table 3.10. Results of Direct and Indirect Calculations of Mean Free Paths
No. trials    Direct       Indirect
10³           1.0036740    0.9892778
10⁴           0.9973210    1.0070934
10⁵           0.9997291    1.0019236
10⁶           0.9983103    1.0001979
Consider the slab problem of Example 3.3 but assume a spherical void or
bubble exists in the slab. The center of the spherical void is located at the
interface between the two materials. The diameter of the void can be a
variable quantity but, for this illustration, will be assumed to be unity. The
bubble is situated directly in front of the point at which the beam strikes the
slab, with its center at the origin of a Cartesian coordinate system. The
neutron beam is incident along the Z axis in the +Z direction. The
geometry is shown in Figure 3.5.
Because this problem has an axis of symmetry along the beam, it is not
essential to track the particles in three dimensions. However, in transport
problems requiring the full capability of Monte Carlo, tracking in three
dimensions will be required. Therefore, because the complexity of this
problem will not be substantially increased by three-dimensional tracking,
and because the concept is necessary in later problems, three-dimensional
tracking will be used.
[Figure 3.5. Geometry for Example 3.5: an incident neutron beam along the +Z axis strikes a two-layer slab (slab 1 and slab 2) containing a spherical bubble centered at the interface.]
Since the interior of the sphere is a void, it has zero total cross section
and all particles entering it will simply stream across to enter the material
on the other side. Calculation of the flight path of the particles must take
this into account. Assume a collision has taken place at a point p, with coordinates x₀, y₀, z₀ inside slab 1. The particle leaves the collision in direction n, with direction cosines u, v, w along the Cartesian axes. We
introduce the path length variable, s, along the flight path to specify the
particle's location according to eqn 3.16,
x = x₀ + us
y = y₀ + vs   (3.44)
z = z₀ + ws
In order to determine the intersection of this flight path with the bubble we will also need the equation for a sphere of radius r₀ centered at the origin,

x² + y² + z² = r₀²   (3.45)
To solve these equations we substitute the expressions 3.44 into eqn 3.45 and, after simplifying, obtain

s² + 2(p·n)s + p² − r₀² = 0   (3.46)

where p is the radial distance from the origin to the point p. Here p is used to designate both the collision point and the vector p = (x₀, y₀, z₀). Thus

p² = x₀² + y₀² + z₀²   (3.47)
and

p·n = ux₀ + vy₀ + wz₀   (3.48)

Solving eqn 3.46 for s then gives

s = −(p·n) ± [(p·n)² − p² + r₀²]^(1/2)   (3.49)
where subtracting or adding the second term determines the entry and exit
points of the particle's path through the spherical void, respectively. If the
discriminant (the term in brackets in eqn 3.49) is negative, the particle's
current flight path does not pass through the sphere. If both solutions for a
non-negative discriminant are negative, the backward extension of the flight
path passes through the sphere. In this case the particle is directed away
from the origin and its path will not pass through the bubble.
If eqn 3.49 has two positive, real solutions, the particle will pass through
the sphere if the flight path is sufficiently long to reach the void. In this
case either any remaining flight path must be added after the particle
streams across the void and strikes the material on the other side, or a new
flight path must be chosen beginning at the latter point. For this problem
we will use the former method. Finally, if there is both a positive and a
negative real solution of eqn 3.49, the particle's current location is inside
the sphere. Since there can be no collisions in the void this should not
occur in the present problem.
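The tests described in the last two paragraphs reduce to a few lines of arithmetic. A Python sketch of eqns 3.47-3.49 (the function name is ours):

```python
import math

def sphere_intersections(p, omega, r0):
    """Solve eqn 3.49 for the distances s along a flight path from point p in
    direction omega (a unit vector) to a sphere of radius r0 at the origin.
    Returns None when the path (extended both ways) misses the sphere,
    otherwise the pair (s_entry, s_exit) with s_entry <= s_exit."""
    x0, y0, z0 = p
    u, v, w = omega
    p_dot_n = u * x0 + v * y0 + w * z0     # eqn 3.48
    p_sq = x0 * x0 + y0 * y0 + z0 * z0     # eqn 3.47
    disc = p_dot_n ** 2 - p_sq + r0 ** 2   # discriminant of eqn 3.49
    if disc < 0.0:
        return None
    root = math.sqrt(disc)
    return (-p_dot_n - root, -p_dot_n + root)

# A particle at (0, 0, -2) moving in +Z toward a unit sphere at the origin:
hit = sphere_intersections((0.0, 0.0, -2.0), (0.0, 0.0, 1.0), 1.0)
# The same particle moving in -Z: both roots negative, the sphere is behind it.
behind = sphere_intersections((0.0, 0.0, -2.0), (0.0, 0.0, -1.0), 1.0)
# A path that misses the sphere entirely:
miss = sphere_intersections((0.0, 2.0, -2.0), (0.0, 0.0, 1.0), 1.0)
```

The three sample calls exercise the three cases discussed above: two positive roots (entry and exit ahead of the particle), two negative roots (backward extension through the sphere), and a negative discriminant (a miss).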
A Fortran program to calculate the movement of particles through the
geometry of Figure 3.5 is shown in Tables 3.11 and 3.12. The subroutines
'Input' and 'Output' are the same as those shown in Tables 3.7 and 3.8, and
are not repeated here. This program is more complicated than the program
of Tables 3.6 - 3.8, but still has readily recognized features in common with
that program. The geometry module performs the tests outlined above to
determine whether a flight path encounters the bubble. The scattering
routine 'Isoout' is from Table 3.2.
The results obtained by executing the program in Tables 3.11 and 3.12
for a bubble diameter of one mfp using various numbers of incident particles
are shown in Table 3.13. These results show that the probability of an
incident particle passing through this slab is 0.2083 ± 0.0004 compared with
0.126 ± 0.003 for the slab without the spherical void. Thus the bubble
increased the probability of particles passing through the slab by about a factor of 1.65.
Table 3.13. Results for Example 3.5
No. source particles    Transmission        Reflection          Absorption
10⁴                     0.206 ± 0.004       0.489 ± 0.005       0.305 ± 0.005
10⁵                     0.2090 ± 0.0013     0.4870 ± 0.0016     0.3040 ± 0.0015
10⁶                     0.2083 ± 0.0004     0.4884 ± 0.0005     0.3034 ± 0.0005
Exercises
1. By converting the collision density distribution of eqn 3.4 into a pdf, show that the "expected value" of the distance traveled is 1/Σt.
1 The definitions of terms used here follow those of G. I. Bell and S. Glasstone, Nuclear Reactor Theory, Van Nostrand Reinhold Co., New York, 1970, pp. 4-6.
2 Ibid., pp. 22 ff. See also K. M. Case and P. F. Zweifel, Linear Transport Theory, Addison-Wesley, Reading, MA, 1967, pp. 43 ff.
3 K. M. Case, F. de Hoffmann, and G. Placzek, "Introduction to the Theory of Neutron Diffusion," Los Alamos Scientific Laboratory, Los Alamos, NM, June 1953, pp. 66 ff.
4 Ibid., p. 70.
5 A. M. Weinberg and E. P. Wigner, The Physical Theory of Neutron Chain Reactors, University of Chicago, Chicago, 1958, Chapter VIII, pp. 181 ff.
6 Case, de Hoffmann, and Placzek, op. cit., p. 136.
7 Ibid., pp. 71 ff.
8 M. Abramowitz and I. Stegun, eds., Handbook of Mathematical Functions, Dover Publications, Inc., New York, 1965, p. 245.
9 Case, de Hoffmann, and Placzek, op. cit., p. 28, eqn 13.
10 J.-P. Francois, "On the Calculation of the Self-Absorption in Spherical Radioactive Sources," Nucl. Instr. Meth. 117, 1974, pp. 153-56.
11 Robert V. Meghreblian and David K. Holmes, Reactor Analysis, McGraw-Hill, New York, 1960, p. 419.
Chapter 4
Energy-Dependent Neutron Transport
Fast neutrons generally lose energy when they undergo collisions with
nuclei. When the incident neutron energy exceeds the lowest excitation
level of the target nucleus such interactions can be inelastic. In this case the
target nucleus absorbs some of the neutron's kinetic energy and the system
kinetic energy is not conserved in the interaction. Therefore the energy loss
may be uncorrelated with the scattering angle. However, under some
conditions the internal energy of the target nucleus may not be changed by
the neutron interaction. Classically this occurs when the neutron energy is
below the lowest excitation energy of the target nucleus. In this case the
scattering is elastic and kinetic energy is conserved. In many materials the
slowing down of neutrons from a few hundred keV to thermal energies may
be described to a reasonable degree of accuracy by the physics of elastic
collisions. For hydrogen, this range of validity extends from a few MeV to
thermal energies.
Let us consider the elastic interaction of a point neutron with an unbound
point nucleus. Such an interaction will consist of isotropic, elastic
scattering in the center of mass of the interacting particles. For practical
purposes we can assume the target mass is an integral multiple of the
neutron mass and use the atomic mass, A, to characterize the target.
The neutron-nucleus interaction is shown schematically in the laboratory
and center of mass coordinate systems in Figure 4.1. In the laboratory
coordinate system, which we will designate the L system, we assume the
neutron moves toward a stationary target nucleus with speed v_L, where the
subscript L designates quantities measured in the L system. The total mass
of the two colliding particles is A + 1, and the speed v_cm of the center of mass, as measured in the L system, is

v_cm = v_L/(A + 1)   (4.1)
The speed of the neutron in the center of mass coordinates, which we will designate the C system, prior to the collision is therefore

v_c = v_L − v_cm = A v_L/(A + 1)   (4.2)

while the velocity of the target nucleus in the C system is

V_c = −v_cm = −v_L/(A + 1)   (4.3)
[Figure 4.1. The neutron-nucleus interaction in the laboratory (L) and center of mass (C) coordinate systems.]
Conservation of momentum in the C system requires

v′_c + A V′_c = 0   (4.4)
where the primes denote post-collision variables. For elastic collisions, the
total energy of the interacting particles, in either system, must be unchanged
by the collision. This requires
v_c² + A V_c² = v′_c² + A V′_c²   (4.5)
Combining eqns 4.4 and 4.5 gives

v′_c = v_c   (4.6)

and

V′_c = V_c   (4.7)
Thus an observer in the C frame sees the particles approaching each other along a straight line, interacting, and separating along another straight line. The post-collision trajectory line passes through the collision point but is rotated with respect to the pre-collision line. Each of the particles leaves the collision site at the same speed with which it approached. The interaction is completely defined by the angle of rotation θ between the incoming and outgoing linear trajectories (see Figure 4.1).
The relation between the post-collision neutron velocity in the C and L coordinates is shown in Figure 4.2. From this figure, by the law of cosines,

v′_L² = v_cm² + v′_c² − 2 v_cm v′_c cos(π − θ)   (4.8)

or

v′_L² = v_cm² + v′_c² + 2 v_cm v′_c cos θ   (4.9)
Substituting eqns 4.1, 4.2, and 4.6 into eqn 4.9 gives, in terms of the pre- and post-collision neutron energies E and E′ in the L system,

E′/E = (A² + 2A cos θ + 1)/(A + 1)²   (4.11)
[Figure 4.2. Relation between neutron scattering angles ψ and θ in L and C coordinates.]
The scattering angle ψ in the L system is related to θ by

cos ψ = (A cos θ + 1)/√(A² + 2A cos θ + 1)   (4.13)
In terms of the parameter α the post-collision energy may be written

E′ = E[(1 + α) + (1 − α) cos θ]/2   (4.14)

so the mean post-collision energy for a neutron of initial energy E₀ is

Ē′ = [(1 + α)/2] E₀   (4.15)

where αE₀ is the lowest possible post-collision energy. From eqn 4.11, E′ is a minimum for θ = π, and thus

α = [(A − 1)/(A + 1)]²   (4.16)
Suppose, for example, that the neutron energy decreased by a factor of ten at every collision. If a particle started with an energy of 1000, after one collision it would have an energy of 100, after two collisions the energy would be 10, and after three collisions the energy would be 1. That is, using logarithms to the base ten, the number of collisions required to reduce the neutron energy from 1000 to 1 would be equal to the logarithm of E₀/E_r divided by the logarithm of the factor by which the energy decreases at each collision; i.e., three. Therefore one estimate of the average number of collisions required to reduce the energy of a neutron from an initial energy E₀ to a final energy E_r is to assume that each collision produces exactly the average loss. By eqn 4.15 this gives

N_L = ln(E_r/E₀) / ln[(1 + α)/2]   (4.17)
Eqn 4.17 gives the linear average of the number of collisions required to reduce the energy of a neutron from E₀ to E_r. However, for neutron downscatter a linear average is considered less accurate than a geometric average. The geometric average number of collisions is

N̄_G = (1/ξ) ln(E₀/E_r)   (4.18)
where ξ is the average logarithmic energy decrement per collision,

ξ = ⟨ln(E₁/E₂)⟩ = ∫ ln(E₁/E₂) p(E₂) dE₂   (4.19)

where p(E₂)dE₂ is the probability that a neutron with initial energy E₁ will have a post-collision energy between E₂ and E₂ + dE₂. This can be shown² to be

p(E₂) = 1/[(1 − α)E₁],   αE₁ ≤ E₂ ≤ E₁   (4.20)
Carrying out the integration gives

ξ = 1 + [(A − 1)²/(2A)] ln[(A − 1)/(A + 1)]   (4.21)
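Eqns 4.19-4.21 are easy to check numerically: averaging ln(E₁/E₂) over the flat distribution of eqn 4.20 must reproduce the closed form of eqn 4.21. A Python sketch (function names are ours):

```python
import math

def xi_closed_form(A):
    """Average logarithmic energy decrement from eqn 4.21."""
    if A == 1:
        return 1.0   # the A = 1 limit of eqn 4.21
    return 1.0 + (A - 1) ** 2 / (2.0 * A) * math.log((A - 1) / (A + 1))

def xi_numeric(A, n=200_000):
    """Average of ln(E1/E2) with E2 drawn from the flat pdf of eqn 4.20,
    i.e. E2 uniform on [alpha*E1, E1]; E1 is arbitrary and set to 1.
    Deterministic midpoint samples stand in for random numbers."""
    alpha = ((A - 1) / (A + 1)) ** 2
    total = 0.0
    for k in range(n):
        e2 = alpha + (1.0 - alpha) * (k + 0.5) / n
        total += math.log(1.0 / e2)
    return total / n

xi12 = xi_closed_form(12.0)           # carbon: about 0.158
err = abs(xi_numeric(12.0) - xi12)
```

For carbon (A = 12) the closed form gives ξ ≈ 0.158, and the numerical average agrees to several decimal places.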
ᵃ The Cd cutoff corresponds to a sharp rise in the Cd absorption cross section with decreasing neutron energy. The energy chosen for the cutoff is typically 0.415 eV. Cd is often used to differentiate between fast and slow neutrons in detectors. See S. Glasstone and M. C. Edlund, The Elements of Nuclear Reactor Theory, D. Van Nostrand, Princeton, NJ, 1952, p. 55; G. F. Knoll, Radiation Detection and Measurement, J. Wiley and Sons, New York, 3rd ed., 2000, p. 505.
Table 4.1. Program to Evaluate Number of Elastic Collisions to Reduce Neutron Energy to Cd Cutoff

!  Determine the mean number of collisions to Cd cutoff (0.415 eV)
!  for a target of atomic mass A, neutrons with start energy E (MeV)
      DATA ECD/0.415E-6/
    4 WRITE(*,'(1X,A\)') ' Enter target mass A, set=0 to stop: '   ! Input variables
      READ(*,*) A; IF(A.LE.0.1) STOP
      WRITE(*,'(1X,A\)') ' Enter neutron energy E (MeV): '
      READ(*,*) E0; IF(E0.LT.ECD) STOP
      WRITE(*,'(1X,A\)') ' Enter number of start particles N: '
      READ(*,*) N; IF(N.LE.0) STOP
      WRITE(*,'(1X,A\)') ' Enter random number seed (an integer): '
      READ(*,*) ISEED; CALL rndin(ISEED)            ! Input complete
      TALLY = 0.; TALLYSQ = 0.                      ! set tally variables to zero
      Loop_Over_Particles: DO I = 1,N   ! 'Track' particles, loop over N start particles
         EP = E0                        ! start each particle with energy E0
         NSCAT = 0                      ! NSCAT is the number of scatterings
         Loop_Over_Energy: DO           ! loop while above cutoff energy
            IF(EP.LT.ECD) EXIT Loop_Over_Energy     ! score if below cutoff energy
            E = EP                      ! update pre-collision energy
            CALL Isocol2(E,EP,A)        ! get post-collision energy EP
            NSCAT = NSCAT + 1
            IF(NSCAT.LT.10000) CYCLE Loop_Over_Energy
eqn 4.17 is 98.6 and that from eqn 4.18 is 95.7. It is apparent that the Monte
Carlo results lie between the two analytic estimates but favor the geometric
average over the linear average. However, the analytic result accounts for a
fraction of a collision in calculating the average number of collisions
required to reach a certain energy, while the Monte Carlo calculation
scores an integral number of collisions required to reduce the neutron energy
to or below the cutoff energy. Hence we expect the Monte Carlo result to be
a fraction of a collision larger than any analytic estimate.
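The calculation of Table 4.1 can be sketched in Python by noting that, for isotropic scatter in the center of mass, the only energy consequence of a collision is that E′ is uniform on [αE, E] (eqn 4.20). The text does not state the target and source energy used; a carbon target (A = 12) and a 1.5 MeV start energy (our assumption) reproduce the analytic estimates of 98.6 and 95.7 quoted above.

```python
import math
import random

def collisions_to_cutoff(A, e0, ecd=0.415e-6, n=5_000, seed=1):
    """Mean number of elastic collisions to slow a neutron from e0 (MeV) to
    the Cd cutoff ecd, for a free target of mass A.  For isotropic scatter
    in the center of mass the post-collision energy is uniform on
    [alpha*E, E] (eqn 4.20), which is all that matters for the count."""
    alpha = ((A - 1) / (A + 1)) ** 2
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        e, nscat = e0, 0
        while e >= ecd:
            e = e * (alpha + (1.0 - alpha) * rng.random())  # E' on [alpha*E, E]
            nscat += 1
        total += nscat
    return total / n

A, E0 = 12.0, 1.5                  # carbon target, 1.5 MeV (our choice)
mc = collisions_to_cutoff(A, E0)
alpha = ((A - 1) / (A + 1)) ** 2
xi = 1.0 + (A - 1) ** 2 / (2 * A) * math.log((A - 1) / (A + 1))
n_geometric = math.log(E0 / 0.415e-6) / xi                     # eqn 4.18
n_linear = math.log(0.415e-6 / E0) / math.log((1 + alpha) / 2) # eqn 4.17
```

The Monte Carlo mean lands between the two analytic estimates, slightly above the geometric average, as the text explains.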
Thus, by selecting θ and φ in the C system, and calculating the equivalent ψ in the L system, one can easily determine the post-collision direction of a neutron in the L coordinates for a pre-collision trajectory along the Z axis.

For an arbitrary pre-collision neutron direction, the conversion of the post-collision direction from C to L coordinates is made in two steps. First, the post-collision vector Ω_C in the C system is determined based on the assumption of incidence along the +Z axis. This is then converted into an equivalent vector Ω_L in the laboratory coordinates using eqns 4.13 and 4.22. Finally, the vector is rotated to the final orientation Ω_F by a coordinate transformation that relates the +Z direction to the actual pre-collision direction of the incident neutron in the L system.
To determine the rotational transformation,ᵃ let A be a unit vector oriented in the direction of the pre-collision neutron,

A = ui + vj + wk   (4.23)
[Figure 4.3. Coordinate Transformation by First Rotating the Z Axis Around the X Axis]
Step 1. Rotate the Z axis by an angle θ around the X axis so that the new Z′ axis is in the plane formed by the X axis and A. The primed coordinate axes will now be

i′ = i
j′ = cos θ j − sin θ k   (4.24)
k′ = sin θ j + cos θ k
and we have

k′ · (i × A) = 0   (4.25)

from which we obtain

sin θ = v/√(1 − u²)   (4.28)

cos θ = w/√(1 − u²)   (4.29)
Therefore, the primed coordinate axes after this first rotation are defined by

i′ = i
j′ = (w/√(1 − u²)) j − (v/√(1 − u²)) k   (4.30)
k′ = (v/√(1 − u²)) j + (w/√(1 − u²)) k
Step 2. Rotate the Z′ axis by an angle φ around the Y′ axis so that Z″ is aligned with A. This gives

j″ = j′ = (w/√(1 − u²)) j − (v/√(1 − u²)) k   (4.31)
k″ = ui + vj + wk

and

i″ = j″ × k″ = √(1 − u²) i − (uv/√(1 − u²)) j − (uw/√(1 − u²)) k   (4.32)

The doubly primed coordinate axes are therefore

i″ = √(1 − u²) i − (uv/√(1 − u²)) j − (uw/√(1 − u²)) k
j″ = (w/√(1 − u²)) j − (v/√(1 − u²)) k   (4.33)
k″ = ui + vj + wk
A similar transformation can be derived by rotating first around the Y axis.

Step 1. Rotate the Z axis by an angle θ around the Y axis so that the new Z′ axis is in the plane formed by the Y axis and A. In this case

sin θ = u/√(1 − v²)

cos θ = w/√(1 − v²)   (4.34)
which results in

i′ = (w/√(1 − v²)) i − (u/√(1 − v²)) k
j′ = j   (4.35)
k′ = (u/√(1 − v²)) i + (w/√(1 − v²)) k
Step 2. Rotate the Z′ axis about the X′ axis so that Z″ is aligned with A. Then

i″ = i′ = (w/√(1 − v²)) i − (u/√(1 − v²)) k
j″ = k″ × i″   (4.36)
k″ = ui + vj + wk

where

k″ × i″ = −(uv/√(1 − v²)) i + √(1 − v²) j − (vw/√(1 − v²)) k   (4.37)
The coordinate axes after the desired rotation are therefore given by

i″ = (w/√(1 − v²)) i − (u/√(1 − v²)) k
j″ = −(uv/√(1 − v²)) i + √(1 − v²) j − (vw/√(1 − v²)) k   (4.38)
k″ = ui + vj + wk
Either of the above two rotational transformations, eqns 4.33 or 4.38, can be used to convert from the post-collision coordinates with respect to the +Z direction to the L coordinates. In our calculations the first transformation will be used unless |u| > 0.9, in which case the second transformation will be used.
Given a post-collision direction Ω_L, which is a unit vector in the laboratory system assuming an incident direction along +Z″, let

Ω_L = u_L i″ + v_L j″ + w_L k″   (4.39)

The final vector Ω_F is the same vector as Ω_L but is expressed in the original coordinate system. Let

Ω_F = u_F i + v_F j + w_F k   (4.40)

Then Ω_F = Ω_L, and taking components gives

u_F = Ω_L · i = (i·i″)u_L + (i·j″)v_L + (i·k″)w_L
v_F = Ω_L · j = (j·i″)u_L + (j·j″)v_L + (j·k″)w_L   (4.41)
w_F = Ω_L · k = (k·i″)u_L + (k·j″)v_L + (k·k″)w_L
Eqns 4.41 can be written in matrix form as

Ω_F = T Ω_L   (4.42)

where T is the matrix of direction cosines between the two sets of axes. For the first rotational transformation defined in eqn 4.33 (the case where |u| ≤ 0.9) this matrix is given by

        [  √(1−u²)          0           u ]
T_x =   [ −uv/√(1−u²)    w/√(1−u²)      v ]   (4.43)
        [ −uw/√(1−u²)   −v/√(1−u²)      w ]

while for the second transformation, eqn 4.38 (the case where |u| > 0.9),

        [  w/√(1−v²)   −uv/√(1−v²)      u ]
T_y =   [     0           √(1−v²)       v ]   (4.44)
        [ −u/√(1−v²)   −vw/√(1−v²)      w ]
Our procedure for tracking isotropic scatter in the center of mass will be to assume the initial particle direction is along the +Z axis. We obtain a post-collision isotropic vector Ω_C in the C system and convert it to the vector Ω_L in the L system using eqns 4.13 and 4.22. However, the actual incident direction is Ω₀,

Ω₀ = u₀ i + v₀ j + w₀ k   (4.45)

Therefore if |u₀| ≤ 0.9 we calculate the final orientation of the post-collision vector as Ω_F = T_x Ω_L using the transfer matrix of eqn 4.43. If |u₀| > 0.9 the calculation is performed using eqn 4.44, Ω_F = T_y Ω_L.
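The transfer matrices of eqns 4.43 and 4.44 translate directly into code. Below is a Python sketch (names are ours) that builds the appropriate matrix for an incident direction (u, v, w) and applies it as in eqn 4.42; rotating the +Z unit vector itself must return the incident direction.

```python
import math

def transfer_matrix(u, v, w):
    """Rotation matrix relating the +Z axis to the incident direction (u,v,w),
    using T_x of eqn 4.43 when |u| <= 0.9 and T_y of eqn 4.44 otherwise."""
    if abs(u) <= 0.9:
        d = math.sqrt(1.0 - u * u)
        return [[d, 0.0, u],
                [-u * v / d, w / d, v],
                [-u * w / d, -v / d, w]]
    d = math.sqrt(1.0 - v * v)
    return [[w / d, -u * v / d, u],
            [0.0, d, v],
            [-u / d, -v * w / d, w]]

def rotate(t, omega):
    """Apply the transfer matrix: Omega_F = T . Omega_L (eqn 4.42)."""
    return tuple(sum(t[i][j] * omega[j] for j in range(3)) for i in range(3))

# An incident direction with |u| > 0.9 exercises the T_y branch:
omega0 = (0.96, 0.2, math.sqrt(1.0 - 0.96**2 - 0.2**2))
omega_f = rotate(transfer_matrix(*omega0), (0.0, 0.0, 1.0))
```

Because the third column of either matrix is (u, v, w), applying it to (0, 0, 1) recovers the incident direction, and the rows remain orthonormal; this makes a convenient unit check for both branches.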
The results of several runs of the program are shown in Table 4.6. The
error estimates are given as fractional standard deviations; i.e., the standard
deviation divided by the estimated mean of the associated random variable.
The results confirm that the neutron backscatter probability after two
collisions with heavy targets is 0.5, and that the mean Z-direction cosine
after two collisions is zero. In addition, the program predicts that, for A = 1,
the mean Z-direction cosine after two collisions is about 0.446, while the
backscatter probability after two collisions is about 0.165.
For large A, the program appears to produce an erratic variance estimate
for the mean Z-direction cosine. The fractional standard deviation for these
results takes on values much greater than one, and does not show a
consistent decrease with the number of particles tracked. The erratic nature
of this estimator does not reflect a lack of convergence in the estimation
procedure, but rather the fact that the true mean of the estimated average
cosine is zero. As the accuracy of the estimate of this mean increases, the
fractional standard deviation becomes a poor way to present the uncertainty
in the mean. In fact, in the limit of a large number of particles, both the
mean and the standard deviation will vanish but, in practice, the fractional
standard deviation will not. For a null result it would be better to express
the uncertainty in the physical units of the estimator rather than as a fraction
of the mean.
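The statistics in Table 4.6, and the difficulty with a null mean, can be illustrated with the tally sums the programs already keep. A Python sketch (names are ours):

```python
import math

def tally_stats(scores):
    """Mean, standard deviation of the mean, and fractional standard
    deviation from per-particle scores, using the running sums TALLY and
    TALLYSQ kept by the programs in this chapter."""
    n = len(scores)
    tally = sum(scores)
    tallysq = sum(x * x for x in scores)
    mean = tally / n
    var = (tallysq / n - mean * mean) / (n - 1)   # variance of the mean
    sd = math.sqrt(max(var, 0.0))
    fsd = sd / abs(mean) if mean != 0.0 else float('inf')
    return mean, sd, fsd

# Scores clustered around 0.5 give a small fractional standard deviation...
m1, s1, f1 = tally_stats([0.4, 0.5, 0.6, 0.5, 0.45, 0.55])
# ...while scores that straddle zero give a mean near zero and a huge fsd.
m2, s2, f2 = tally_stats([-0.5, 0.5, -0.45, 0.55, -0.52, 0.43])
```

The second data set has a true mean near zero, so its fractional standard deviation is enormous even though the absolute uncertainty is modest; this is the behavior seen for A = 100 in Table 4.6.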
Table 4.6. Results for Example 4.2
Seed    Start particles    Target mass A    Mean Z-dir cos    Frac std dev    Backscat prob    Frac std dev
11      10³                1                0.457             0.029           0.165            0.071
12      10⁴                1                0.445             0.009           0.166            0.022
13      10⁵                1                0.446             0.003           0.166            0.007
21      10³                100              -8.60E-4          21.41           0.496            0.032
22      10⁴                100              7.81E-3           0.737           0.494            0.010
23      10⁵                100              9.91E-4           1.85            0.501            0.003
Using the coding developed in Example 4.2, along with that used in the
examples in Chapter 3, it is possible to simulate the transport of neutrons
through elastic scattering materials with physically correct angular and
energy dependence. However, even with this improved collision model we
have not addressed the question of the energy dependence of the interaction
cross sections. The variation of cross sections with energy, and the
correlation of the angular dependence of the post-collision particle direction
with energy, can be complicated. Angular- and energy-dependent cross
sections may be included in Monte Carlo calculations by using tabulated data.
(4.46)
Here ξ is the average logarithmic energy decrement given by eqn 4.21 and u is the neutron lethargy,

u = ln(E₀/E)   (4.47)
The Fermi age τ is then defined in terms of the diffusion coefficientᵃ D as

τ(u) = ∫₀ᵘ [D/(ξΣ_s)] du′   (4.48)
(4.49)
ᵃ The diffusion coefficient arises in the kinetic theory of gases, specifically from Fick's Law, J = −D grad φ. The diffusion coefficient D is approximately equal to 1/(3Σt). See A. M. Weinberg and E. P. Wigner, The Physical Theory of Neutron Chain Reactors, University of Chicago, Chicago, 1958, Chapter VIII, pp. 181 ff.
(4.50)
Thus the age of a neutron is equal to one-sixth of the mean square distance from the point at which it was born to the point at which its age is τ,

τ = ⟨r²⟩/6   (4.52)
We wish to determine the value of τ for neutrons at the indium resonance
from a fission source in light water. A Fortran code for calculating this
quantity is shown in Tables 4.7 - 4.10. The user may set both the source
energy and the cutoff energy as input to the calculation. The input data are
read in subroutine 'Input,' shown in Table 4.8. An energy-dependent
version of subroutine 'Isocol' is shown in Table 4.9. For the present
example we are interested in using a fission source, which is activated by
setting the source variable to zero. For this purpose the Watt fission
spectrum is used beginning at statement 65 of Table 4.7. Source neutron
energies are selected from this spectrum using the rejection technique of
Kalos as described by Everett and Cashwell.⁴ Neutron slowing down in
water is almost entirely the result of elastic scattering interactions between
the neutrons and the hydrogen and oxygen nuclei of the water. Because the
absorption cross sections in these materials are small, in this example we
will neglect absorption.
To obtain a reasonable estimate of the slowing down density we will
need a model for the hydrogen and oxygen scattering cross sections. For
present purposes we have made a rough tabulation of the total cross sections
for H and O from published values.⁵ The cross section data bases are
contained in subroutines 'Hydrogen' and 'Oxygen,' shown in Table 4.10.
Relatively few points are included because the purpose is merely to
illustrate the concept of using tabulated cross sections without encumbering
the coding. The energy values 'epoint,' and the microscopic cross sections
The calculation follows the source neutrons until they down scatter below
the indium resonance (about 1.4 eV) or, in a calculation that uses a finite
geometry, they escape from the system. When they reach the cutoff energy
their radius is determined and the square of the radius is scored. A standard
variance estimate is made. To obtain a valid estimate for the age, the system
radius should be set large enough that the number of particles escaping from
the geometry is essentially zero. To verify that only a negligible number of
neutrons escape before reaching the cutoff energy, the number of neutrons
leaking from the system is scored. The age is then determined by eqn 4.52.
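The Kalos rejection scheme for the Watt spectrum, as given by Everett and Cashwell, is compact enough to sketch here in Python. The Watt parameters a and b below are representative fission values chosen by us; they are not taken from the text.

```python
import math
import random

def watt_sample(a, b, rng):
    """Sample an energy (MeV) from the Watt spectrum
    f(E) ~ exp(-E/a) * sinh(sqrt(b*E)) using the rejection scheme of
    Kalos, as given by Everett and Cashwell."""
    k = 1.0 + a * b / 8.0
    ell = a * (k + math.sqrt(k * k - 1.0))
    m = ell / a - 1.0
    while True:
        x = -math.log(rng.random())     # two independent exponential deviates
        y = -math.log(rng.random())
        if (y - m * (x + 1.0)) ** 2 <= b * ell * x:
            return ell * x              # accepted: E = L*x is Watt-distributed

rng = random.Random(1)
a, b = 0.965, 2.29                      # representative parameters (MeV, 1/MeV)
sample = [watt_sample(a, b, rng) for _ in range(50_000)]
mean_e = sum(sample) / len(sample)
```

The mean of the Watt spectrum is 3a/2 + a²b/4, about 1.98 MeV for these parameters, which the sample average reproduces.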
[Figure: total cross sections of hydrogen and oxygen versus energy (eV), from the rough tabulations used in this example.]
Exercises
1 S. Glasstone and M. C. Edlund, The Elements of Nuclear Reactor Theory, D. Van Nostrand, Princeton, NJ, 1952, pp. 143-45. Regarding the linear average of the number of collisions required to reduce the energy of a neutron to a certain value, see the footnote on p. 145 of this reference.
2 Ibid., p. 142.
3 For a discussion of Fermi age theory see Ibid., pp. 172-89; M. M. R. Williams, The Slowing Down and Thermalization of Neutrons, John Wiley & Sons, New York, 1966, pp. 358 ff.
4 C. J. Everett and E. D. Cashwell, "A Third Monte Carlo Sampler," LA-9721-MS, Los Alamos National Laboratory, Los Alamos, NM, 1983, p. 129.
5 Victoria McLane, Charles L. Dunford, and Philip F. Rose, Neutron Cross Sections, Vol. 2, Neutron Cross Section Curves, Academic Press, New York, 1988, pp. 1-3, 47-49.
Chapter 5
Problem variables are stored in labeled common blocks so that the various subroutines have ready access to them without modifying the calls to the subroutines. As additional capabilities are added to PFC the lists of variables in the common blocks can be expanded.
For example, the specification "1.AND.2" consists of all points that are in body 1 and are also in body 2. This is shown in Figure 5.5. Combinations of logical operators are also allowed. For example, "(.NOT.1).AND.2" consists of all points that are in body 2 but are not in body 1. This region is illustrated in Figure 5.6.
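The body-and-zone logic can be mimicked with point-membership functions. Below is a Python sketch (illustrative bodies and names, not the PFC routines) using two concentric spheres, so that "1.AND.2" is the inner sphere and "(.NOT.1).AND.2" is the shell between them.

```python
def in_sphere(point, center, radius):
    """True when the point lies inside a spherical body."""
    return sum((p - c) ** 2 for p, c in zip(point, center)) < radius ** 2

def make_zone(positive, negative):
    """A zone is the set of points inside every 'positive' body and outside
    every 'negative' body, e.g. "(.NOT.1).AND.2" has body 2 positive and
    body 1 negative."""
    def contains(point):
        return all(b(point) for b in positive) and not any(b(point) for b in negative)
    return contains

body1 = lambda pt: in_sphere(pt, (0.0, 0.0, 0.0), 1.0)   # inner sphere
body2 = lambda pt: in_sphere(pt, (0.0, 0.0, 0.0), 2.0)   # outer sphere

both = make_zone([body1, body2], [])    # "1.AND.2": inside both bodies
shell = make_zone([body2], [body1])     # in body 2 but not in body 1

origin, mid, outside = (0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)
```

A point at the origin lies in the "1.AND.2" zone, a point at radius 1.5 lies only in the shell, and a point at radius 3 lies in neither.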
[Figure: example geometry with dimensions of 1.0 cm and 2.0 cm.]
The general format for entering the geometry description using the
combinatorial package in PFC is shown in Table 5.4. In the PFC geometry
package the last zone defined must surround the entire problem geometry.
This zone is assumed to consist of a total, or "black" absorber, and particles
that enter this final zone are treated as having escaped from the geometry. If
there is a path for particles to leave the geometry without entering the last
defined zone, tracking will usually fail with a resulting "Lost in Space"
message. The description for the geometry of Figure 5.7 is shown in Table
5.5.
process. An error may occur, for example, if the user places two bodies in
contact with one another by assigning them the same coordinate value. The
results of the calculation of entry and exit points for the two bodies may
round differently and an artificial gap may appear between the bodies. If
this gap is not defined to be in a zone, the code will not be able to follow a
particle that is passing between the bodies. In such a case the particle is
lost.
Tracking problems may also occur when a particle passes through the
corner of a body. In this case round-off errors in the computed distances to
the entry and exit points of the body may make it appear that the particle has
passed through the body before it has entered the body. The coding may
determine that the particle has left the current zone but may not be able to
find the particle in a subsequent zone. Again the particle is lost.
In general, particles should not be started on a geometric boundary
because in such a case the geometry package cannot identify the zone the
particle is in. This is unlikely to occur when using distributed sources, but
may occur when using discrete point sources. For example, if a point source
is placed on the interface between two slabs, an ambiguity similar to the
problem of tracking a particle through an artificial gap between bodies can
occur. The problem can usually be solved without loss of accuracy by
placing the source point a short distance into one of the zones.
One useful "trick" to reduce tracking problems is to define bodies such
that they overlap slightly, and to subtract those that are not in a given zone.
This eliminates any possible gaps between bodies caused by round-off
errors. It is also possible to define a large body that encloses the entire
problem and create a "catch-all" zone by subtracting the bodies that
constitute the real zones from this large body. If the material in this outer
body is defined as a vacuum there should be no change in the problem
results. Particles that otherwise would be lost will then merely pass through
a bit of vacuum before reaching the next zone. Despite these precautions,
however, it is rare for a particle to get lost unless there is an error in the
geometry description.
Two subroutines are used to initialize tally arrays and to prepare for the
random walk. Entry point 'Statone' in subroutine 'Stats' initializes arrays
used for storing event scores and the squares of these scores. The latter are
used to estimate the standard deviation associated with the various estimated
quantities. Subroutine 'Stats' is shown in Table 5.9. The coding at entry
point 'Statone' is similar to that used in prior examples, such as that shown
Subroutine 'Hit' determines the distance a particle can travel before exiting the current zone; i.e., before it "hits" a boundary. Given the results
of calculations indicating whether or not individual bodies are "hit" by the
particle track, and given the descriptions of the zones in terms of their
constituent bodies, the routine determines whether a track intersects a given
zone and, if so, the distance to the entry and exit points of that zone along
the particle flight path.
The track must intersect all of the positive bodies that constitute a zone -
i.e., those for which a point must be inside the bodies to be in the zone - in
order for the particle to enter the zone. The distance the particle must travel
before entering the zone cannot be less than the distance to enter the first
such body, while the distance to leave the zone cannot be more than the
distance to leave the last such body. Points must be outside all of the
negative bodies of a zone in order to be in the zone. If the track does not
intersect a negative body of a zone the calculation of entry and exit points
need consider only the positive bodies. However, if the track does intersect
a negative body, then the particle leaves the zone when it enters that body.
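The interval bookkeeping described above can be sketched compactly. The Python below is an illustrative rendering of the logic, not the book's Fortran; the function `zone_interval`, its argument layout, and its simplifications (a track that enters the zone inside a negative body is simply rejected) are our own.

```python
def zone_interval(positive, negative):
    """Entry/exit distances of a flight path through one zone.

    positive, negative: lists with one entry per body, each either
    (t_in, t_out), the span where the track is inside that body,
    or None if the track misses the body entirely.
    Returns (t_enter, t_exit) for the zone, or None if it is missed.
    """
    # The track must pass through every positive body of the zone.
    if any(iv is None for iv in positive):
        return None
    t_enter = max(iv[0] for iv in positive)  # latest entry among positive bodies
    t_exit = min(iv[1] for iv in positive)   # earliest exit among positive bodies
    for iv in negative:
        if iv is None:
            continue                         # a missed negative body imposes no limit
        if iv[0] <= t_enter <= iv[1]:
            return None                      # simplification: track enters inside the hole
        if t_enter < iv[0] < t_exit:
            t_exit = iv[0]                   # particle leaves the zone on entering the body
    return (t_enter, t_exit) if t_enter < t_exit else None
```

For example, a track inside a positive body on [0, 10] that also intersects a negative body on [4, 6] is in the zone only on [0, 4].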
The logic of determining the bodies through which the particle track will
pass, and of calculating the distances to entry and exit of each of those
bodies, was used in Example 3.5. In PFC the calculation of the intersection
of the flight path with spheres is performed in subroutine 'Sph.' This
subroutine is shown in Table 5.15. The only difference between 'Sph' and
the program shown in Table 3.12 is the coordinate translation required in
order to include spheres whose centers are not at the origin. The geometric
logic underlying the calculation was discussed in Section 3.7. The
subroutine for calculating the distance to entry and exit points for RPPs,
Each zone that a particle enters along its flight path is related to the zone
the particle is leaving. As the problem proceeds, the coding in 'Hit' uses the
arrays defined at the end of subroutine 'Geomin' to identify those zones
most likely to be encountered when leaving a given zone. Then, when
determining the zone to be entered when exiting this zone the subroutine
searches through the zones in the most likely order based on those already
found. This "learning" process will generally reduce the time required for
tracking particles in complex geometries.
Since a mistake may occur in the geometry description, it is possible for
particles to become lost due to such an error. When this occurs subroutine 'Hit'
writes the message "Lost in Space" and calls subroutine 'Dump.' The
'Dump' subroutine, shown in Table 5.17, writes the geometry description
along with the last known location and direction of the particle, and then
stops the calculation. The information provided by 'Dump' can be used to
help determine errors in the geometry description.
As mentioned above, even when the user is confident that the geometry
has been correctly defined, particles may occasionally be lost. In this case
the user may choose to replace the 'stop' statement in subroutine 'Dump'
with a 'return' statement, and thereby allow the calculation to proceed even
though some particles have been lost. If the number of particles lost is
negligible compared with the number of start particles one usually assumes
that the systematic bias caused by such lost particles is also negligible.
Using the flight path from 'Dist,' the physical distance to the boundary
from 'Hit,' and the total cross section from 'Mxsec,' the 'Walk' routine
determines whether the particle will suffer a collision within the current
zone or will escape from the zone. This test is performed on line 15 of
subroutine 'Walk.' If the path continues into the next zone the particle is
"stopped" at the boundary in order to process the boundary crossing.
Subroutine 'Bdrx,' shown in Table 5.19, is called by 'Walk' to process
boundary crossings. Although the library version of 'Bdrx' does not include
any scoring when a boundary crossing occurs, it can be updated to do so.
Examples of using 'Bdrx' for event scoring are shown in Chapter 7.
If the next collision occurs within the current zone the tracking of the
particle flight path is complete and PFC is ready to process the collision.
Collision events are processed in subroutine 'Col,' shown in Table 5.20. A
collision may result in the particle being killed, the same particle emerging
with a new direction and energy, or secondary particles being produced.
When the random walk for all of the source particles is completed the
main program calls subroutines to calculate and print the results. Entry
point 'Statend' of subroutine 'Stats' (Table 5.9) is used to calculate the
standard deviations in the results stored in the library arrays 'bscore' and
'cscore,' and to print the results. In addition, subroutine 'Output' is
available to calculate and print any additional information desired. The
library version of subroutine 'Output,' shown in Table 5.21, obtains and
prints the ending random number seed. As with all routines in this
framework, the user can modify 'Output' to meet the requirements of any
particular problem.
identical to the library versions. The line numbers shown in the modified
subroutines of this and future examples are unchanged from the library
versions. Unnumbered lines contain changes and line numbers not listed
indicate lines that have been omitted.
The geometric description for this example using the PFC combinatorial
geometry routines is given in the file 'geom.txt,' shown in Table 5.23. Only
one sphere, centered at the origin, with a radius of 10 units, is needed to
describe the physical shapes. Zone one is defined as the interior of the
sphere. Zone two is defined as everything not in the sphere. Particles
therefore "escape" if they leave the sphere.
The cross section information for this calculation is given in the file
'xsects.txt,' shown in Table 5.24. Since only the first zone contains
scattering material and the size of the sphere is 10 mfp, the total cross
section is set to 1.0. The material is a pure scatterer and therefore the non-
absorption probability at each scattering event is also 1.0.
Table 5.24.
the collision point. It assures that each bin in the array ' cscore' corresponds
to 0.1 mfp. Once the correct bin has been found the weight of the particle,
'wate,' at the collision is added to the tally in this bin.
object file produced is linked with the PFC library to supply the final coding
for solving the problem. As expected, the results obtained by running this
modified version of PFC are the same as those of Example 3.1, with only
statistical variations because of different random number sequences.
Although much about PFC may appear new and different upon first
glance, most of the coding is closely related to the custom routines that were
developed to solve the examples in Chapters 1 - 4. In those examples we
developed specialized routines to track particles, to perform isotropic
scattering, to transform scattering angles from C to L coordinates, and to
perform other functions. PFC allows us to place these and other processes
that we will develop for use in Monte Carlo transport into a formalized
structure. None of the flexibility exhibited by the custom routines has been
lost since any or all of the coding in the PFC library routines can be changed
as needed. The routines are intended to be easy to understand, and the user
should be able to modify or extend the coding as required for specific
problems. We will expand the applicability of PFC to realistic transport
problems in the chapters that follow.
5. A Probabilistic Framework Code 137
Exercises
1. Repeat Example 5.1 using a sphere with radius 15 mfp. Compare the
Monte Carlo results to the diffusion theory solution (eqn 3.34).
1. W. Guber et al., "A Geometric Description Technique Suitable for Computer Analysis of Both the Nuclear and Conventional Vulnerability of Armored Military Vehicles," MAGI-6701, Mathematical Applications Group, Inc., White Plains, NY, 1967.
Chapter 6
Variance Reduction Techniques
6.1 Introduction
In the examples presented thus far the sampling has been unbiased; i.e.,
the points at which the independent variable was evaluated were selected
from the distribution function that directly describes the sampling problem.
Although nature always plays unbiased Monte Carlo, the practitioner cannot
always do so. Calculations must produce an answer in a reasonable time
with a reasonable expenditure of effort. This may not be possible without
the use of some mathematical technique for reducing the variance.
Fortunately there are methods available that can assist the user in this
regard. Variance reduction, properly applied, can improve the results of a
Monte Carlo calculation without requiring an increase in the amount of
computational resources applied to the problem; i.e., variance reduction
methods can increase the efficiency of the calculation.
Any practical Monte Carlo result will have a non-zero variance
associated with it. As explained in Chapter 2, this variance can almost
always be reduced by increasing the number of samples taken from the
distribution being analyzed. However, the rate of reduction in the variance
obtained by this method, being generally proportional to the square root of
the number of samples, is slow. To reduce the variance in a sampled
quantity one usually makes use of a technique such as stratification, or of
importance sampling. The latter is sometimes referred to as "biasing."
Stratification involves the definition and use of sub-regions in the domains
of certain variables in the problem but does not change the distribution from
which samples are selected. Importance sampling, on the other hand,
changes the distributions from which the samples are selected. The key
issue in making such changes is to leave the mean value of the result
unchanged while reducing the variance of the sample.
The theoretical basis for variance reduction using biasing was provided
in Section 2.5. There it was shown that one can select from a modified
distribution function v'(x) of the form given by eqn 2.92,

    v'(x) = V(x)f(x)/g(x)                                        (6.1)
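The essential mechanics of biased sampling can be illustrated with a short sketch (Python rather than the book's Fortran; the helper `importance_estimate` and the particular densities are hypothetical). Sampling from a biased density g and weighting each score by f(x)/g(x) leaves the estimated mean unchanged:

```python
import math
import random

def importance_estimate(score, f_pdf, g_pdf, g_sample, n=100_000, seed=1):
    """Estimate the mean of score(X) under density f by sampling from a
    biased density g.  Each sample is weighted by f(x)/g(x), which
    preserves the mean while (for a well-chosen g) reducing the variance.
    Hypothetical helper, not part of the PFC library."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = g_sample(rng)
        total += score(x) * f_pdf(x) / g_pdf(x)
    return total / n

# Unbiased density: f uniform on [0,1].  Biased density: g(x) = 2x
# favors large x, sampled by inverting its CDF, x = sqrt(r).
est = importance_estimate(
    score=lambda x: x * x,         # quantity being tallied
    f_pdf=lambda x: 1.0,           # true (analog) density
    g_pdf=lambda x: 2.0 * x,       # biased density
    g_sample=lambda rng: math.sqrt(rng.random()),
)
```

Here g concentrates samples where the score is large, yet the estimate still converges to the analog mean of 1/3.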
The efficiency of a Monte Carlo calculation can be defined as

    ε = 1/(Tσ²)                                                  (6.2)

Here T is the run time of a calculation and σ² is the variance of the result.
Obviously this expression cannot be used to compare the results obtained
from different problems, or even from different estimated quantities in the
same problem. However, it provides a useful means for quantifying the
effect of applying alternative variance reduction techniques to a particular
result.
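Eqn 6.2 is simple to apply. The sketch below (Python, with made-up run times and variances purely for illustration) shows how two runs of the same problem might be compared:

```python
def efficiency(run_time, variance):
    """Figure of merit from eqn 6.2: eps = 1/(T * sigma**2)."""
    return 1.0 / (run_time * variance)

# Hypothetical numbers: a variance-reduced run may cost more time per
# history and still win if it lowers the variance enough.
analog = efficiency(run_time=100.0, variance=4.0e-9)
biased = efficiency(run_time=130.0, variance=2.0e-9)
ratio = biased / analog   # about 1.54, i.e. roughly 54% more efficient
```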
For problems in which the source particles are distributed over space,
energy, angle, or time, some source particles may be more likely to
contribute to a particular result than others. In such problems it is often
6. Variance Reduction Techniques 141
Table 6.1. Modified Subroutines for Example 6.1 with Unbiased Source, No Stratification
Subroutine Location
'Source' Table 6.2
'Bdrx' Table 6.3
'Col' Table 6.4
'Stats' Table 6.5
Table 6.2. Subroutine 'Source' for Example 6.1 with Unbiased Source, No Stratification
SUBROUTINE SOURCE                                                      !  1
REAL(8) x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ                   !  2
COMMON/PART/x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ,nzcur,newzn,ngroup  ! 3
REAL(8) FLTRN
z=FLTRN()          ! pick uniformly across slab
x=0.0d0; y=0.0d0   ! starts particle along z-axis
nzcur=1            ! assures source is in zone one                     !  6
CALL ISOOUT        ! direction chosen isotropically                    !  7
wate=1.0d0         ! particle starts with a weight of one              !  8
RETURN                                                                 !  9
END                                                                    ! 10
Table 6.3. Subroutine 'Bdrx' for Example 6.1 with Unbiased Source, No Stratification
SUBROUTINE BDRX                                                        !  1
REAL(8) delta                                                          !  2
REAL(8) x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ                   !  8
COMMON/PART/x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ,nzcur,newzn,ngroup  ! 9
REAL(8) dmfp,dtr,xsec,dcur                                             ! 10
COMMON/TRACK/dmfp,dtr,xsec,dcur                                        ! 11
REAL(8) bscore(10),bsumsq(10),cscore(10),csumsq(10),bpart(10),cpart(10)
COMMON/STAT/bscore,bsumsq,cscore,csumsq,bpart,cpart
delta=dtr-dcur         ! delta=distance traveled to reach boundary     ! 12
dcur=dtr               ! update current distance traveled              ! 13
dmfp=dmfp-delta*xsec   ! subtract current distance in mfp from dmfp    ! 14
z=z+w*delta            ! update position, z-direction only
IF(z.LE.0.0d0)THEN
  bpart(1)=wate        ! leaks left
ELSE
  bpart(2)=wate        ! leaks right
ENDIF
nzcur=-1               ! set nzcur=-1 (for escape)
RETURN                                                                 ! 19
END                                                                    ! 20
shown in Table 6.5. This subroutine accumulates statistics for the two cases
being scored in this problem.
Table 6.4. Subroutine 'Col' for Example 6.1, with Unbiased Source, No Stratification
SUBROUTINE COL                                                         !  1
REAL(8) FLTRN,delta                                                    !  2
REAL(8) x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ                   !  3
COMMON/PART/x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ,nzcur,newzn,ngroup  ! 4
REAL(8) dmfp,dtr,xsec,dcur                                             !  5
COMMON/TRACK/dmfp,dtr,xsec,dcur                                        !  6
REAL(8) sigt(20),c(20);  ! dimensions allow up to 20 different media   !  7
COMMON/GEOM/sigt,c;  ! sigt is total cross section, c is non-absorption prob  ! 8
delta=dmfp/xsec          ! distance traveled to collision              !  9
dtr=dcur+delta           ! update total distance traveled              ! 10
x=xo; y=yo; z=zo+w*dtr   ! update position, z-direction only
IF(FLTRN().GT.c(nzcur))THEN  ! see if particle is killed in collision
  nzcur=-1               ! set nzcur=-1 if killed
  RETURN
ENDIF
CALL ISOOUT              ! assumes isotropic scatter in lab system     ! 20
RETURN                                                                 ! 21
END                                                                    ! 22
Table 6.5. Subroutine 'Stats' for Example 6.1 with Unbiased Source, No Stratification
SUBROUTINE Stats                                                       !  1
REAL(8) bscore(10),bsumsq(10),cscore(10),csumsq(10),bpart(10),cpart(10)  ! 2
COMMON/STAT/bscore,bsumsq,cscore,csumsq,bpart,cpart                    !  3
COMMON/IN/npart,nbatch                                                 !  4
REAL(8) x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ                   !  5
COMMON/PART/x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ,nzcur,newzn,ngroup  ! 6
REAL(8) tmp,tmp1,tmp2,var,stdev                                        !  7
ENTRY Statone  ! entry point to initialize arrays for complete problem !  8
bscore=0.0d0; bsumsq=0.0d0; cscore=0.0d0; csumsq=0.0d0                 !  9
RETURN                                                                 ! 10
ENTRY Statlp   ! entry point to initialize arrays for a particle       ! 11
bpart=0.0d0; cpart=0.0d0                                               ! 12
RETURN                                                                 ! 13
ENTRY Statelp  ! entry point to store scores for a particle            ! 14
bscore(1)=bscore(1)+bpart(1)      ! store score                        ! 15
bsumsq(1)=bsumsq(1)+bpart(1)**2   ! store square for variance calculation  ! 16
bscore(2)=bscore(2)+bpart(2)      ! store score
bsumsq(2)=bsumsq(2)+bpart(2)**2   ! store square for variance calculation
RETURN                                                                 ! 19
ENTRY Statend  ! entry point to calculate and print results            ! 20
tmp=DFLOAT(npart)                                                      ! 21
var=bsumsq(1)/tmp-(bscore(1)/tmp)**2        ! variance of left distr
stdev=DSQRT(var)                            ! standard deviation of left distr
WRITE(16,*)bscore(1)/tmp,stdev/DSQRT(tmp)   ! result and std dev of result  ! 24
var=bsumsq(2)/tmp-(bscore(2)/tmp)**2        ! variance of right distr
stdev=DSQRT(var)                            ! standard deviation of right distr
WRITE(16,*)bscore(2)/tmp,stdev/DSQRT(tmp)   ! result and std dev of result
RETURN                                                                 ! 28
END                                                                    ! 29
The geometry description for this problem is shown in Table 6.6. The
geometry consists of a single RPP that is one unit of distance thick in the Z
direction. This unit dimension was chosen to match the sampling used for
the spatial source distribution in the modified subroutine 'Source;' i.e., the Z
coordinate of a start particle is selected over 0 < z < 1, and thus the Z limits of
the RPP used to define zone 1 are [0,1]. Because we consider only the Z
coordinate of the particle flight path, the dimension of the RPP in the x-y
plane should be irrelevant. However, in order to make use of our standard
geometry input and tracking routines, these lateral dimensions must be large
enough to prevent particles from escaping from the "sides" of the slab. For
convenience we assume all particle free flights start on the Z axis but we
include all three direction components in computing the collision point.
From the Appendix we know that the smallest number that can be
obtained with the random number generator we are using is [2^31 - 1]^-1 ≈ 4.7
x 10^-10. By eqn 2.14, this number corresponds to a flight path of
approximately 21.5 mfp. This is the greatest distance a particle can travel in
a single flight in our calculation. Thus, since in subroutine 'Col' after each
flight path the particle is returned to the Z axis, if the lateral dimensions of
the slab are set greater than 21.5 mfp the particles cannot escape along the X
and Y axes and the slab is effectively infinite in these directions. We have
chosen to define the maximum (x,y) extent of the slab to be 50 units of
distance, which for Σt ≥ 21.5/50 ensures that no particles escape out the
sides of the slab. The input for the cross section data has been used to
define the thickness of the slab in mfp by setting Σt = 10, which means a unit
(1 cm) thickness corresponds to ten mfp.
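The limiting flight path quoted above is easy to verify. Assuming the path length is sampled as d = -ln(ξ) (eqn 2.14) with ξ set to the generator's smallest value:

```python
import math

# Smallest value produced by the random number generator of the Appendix.
xi_min = 1.0 / (2**31 - 1)     # about 4.7e-10
d_max = -math.log(xi_min)      # longest possible single flight, in mfp
# d_max is about 21.5 mfp; lateral slab dimensions of 50 distance units
# at Sigma_t = 10 per unit are 500 mfp, far beyond any single flight.
```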
Executing the code described in Tables 6.2 - 6.5, with the geometry of
Table 6.6, c = 0.5, and Σt = 10, provides separate estimates of the probability
of source particles leaking from the left and right faces of the slab. The sum
of these quantities constitutes the desired result. Because of symmetry, the
two partial results should be equal, and by comparing them we can get some
indication of the accuracy of the calculation. The results obtained from
running 10^7 source particles with this analog version of PFC, which required
102 sec of execution time, are 0.04258 and 0.04266 for the left and right
leakage probabilities, respectively. The standard deviations estimated for
these results are 6.38 x 10^-5 and 6.39 x 10^-5, and thus by eqn 6.2 the
efficiency of this analog calculation is about 2.4 x 10^6. The right and left
leakage estimates agree to within just over one standard deviation.
However, by using only a single zone in the slab we have no data on the
spatial dependence of collisions and particle tracks within the slab. Thus we
have forgone information on the sampling of phase space within the slab in
favor of a simple geometry. The separate scores for the left and right
leakage compensate somewhat for this shortcoming, but in a real problem
we might prefer to add some spatial resolution to help evaluate the
thoroughness with which we have sampled the important regions of the slab.
To begin the application of variance reduction to this problem we will
first consider the use of stratification based on source location and direction.
The list of the modified subroutines to be used in this stratified calculation
is given in Table 6.7. We have arbitrarily selected ten equal-width spatial
and ten equal-cosine-width angle bins in which to score the results. As
before, we will proceed by selecting a source particle location randomly
over the width of the slab. However, this time we will score the result of the
particle leakage based on the bin in which it was born. Thus we will
produce a matrix of 200 scores - one hundred estimates of the left leakage,
one for each source bin, and the same number for the right leakage.
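The bin bookkeeping can be summarized in a sketch. The Python below is not the PFC coding; the "leakage" rule (a particle born at z leaks with probability z/2) is a stand-in chosen only so the stratified tallies have something to score:

```python
import math
import random

def stratified_leakage(n_per_stratum=1000, n_strata=10, seed=2):
    """Sketch of stratified scoring: tally a per-stratum mean and
    variance, then combine the strata with equal 1/n_strata weights."""
    rng = random.Random(seed)
    total, total_var = 0.0, 0.0
    for i in range(n_strata):
        scores = []
        for _ in range(n_per_stratum):
            # source position stratified: uniform within stratum i
            z = (i + rng.random()) / n_strata
            scores.append(1.0 if rng.random() < z / 2.0 else 0.0)
        m = sum(scores) / n_per_stratum                      # stratum mean
        v = sum(s * s for s in scores) / n_per_stratum - m * m
        total += m / n_strata                                # each stratum is 1/10 of source
        total_var += v / n_per_stratum / n_strata**2
    return total, math.sqrt(total_var)
```

Each stratum contributes its own mean and variance; for the stand-in rule above the combined estimate converges to 0.25.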
Because we will make use of the total leakage by spatial bin later, we have included in
subroutine 'Stats' a calculation of the sum of the leakages over the angles.
These results are stored in the arrays 'zstratb' and 'zstratc' for the left and
right leakage, respectively. The variances in these totals, 'zstratb2' and
'zstratc2,' are calculated and written to the output file.
Table 6.8. Subroutine 'Source' for Example 6.1 with Source Stratification
SUBROUTINE SOURCE                                                      !  1
REAL(8) x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ                   !  2
COMMON/PART/x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ,nzcur,newzn,ngroup  ! 3
REAL(8) FLTRN
REAL(8) bscore(10,10),bsumsq(10,10),cscore(10,10),csumsq(10,10), &
bpart(10),cpart(10)
COMMON/STAT/bscore,bsumsq,cscore,csumsq,bpart,cpart, &
nsamp(10,10),nsampb(10,10),nsampc(10,10)
COMMON/EXAMPLE/n,m
nzcur=1                    ! source is in zone one
z=FLTRN()                  ! pick uniformly across slab
x=0.0d0; y=0.0d0           ! starts particle along z-axis
CALL ISOOUT                ! direction chosen isotropically            !  7
wate=1.0d0                 ! particle starts with a weight of one      !  8
m=INT(5.0d0*(w+1.0d0))+1   ! index for scoring by angle
IF(m.GT.10)m=10
n=INT(10.0d0*z)+1          ! index for scoring by location
IF(n.GT.10)n=10
nsamp(n,m)=nsamp(n,m)+1    ! number of particles in each bin
RETURN                                                                 !  9
END                                                                    ! 10
Table 6.9. Subroutine 'Bdrx' for Example 6.1 with Source Stratification
SUBROUTINE BDRX                                                        !  1
REAL(8) delta,score
REAL(8) x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ                   !  8
COMMON/PART/x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ,nzcur,newzn,ngroup  ! 9
REAL(8) dmfp,dtr,xsec,dcur                                             ! 10
COMMON/TRACK/dmfp,dtr,xsec,dcur                                        ! 11
REAL(8) bscore(10,10),bsumsq(10,10),cscore(10,10),csumsq(10,10), &
bpart(10),cpart(10)
COMMON/STAT/bscore,bsumsq,cscore,csumsq,bpart,cpart, &
nsamp(10,10),nsampb(10,10),nsampc(10,10)
COMMON/EXAMPLE/n,m
delta=dtr-dcur             ! delta=distance traveled to reach boundary ! 12
dcur=dtr                   ! update current distance traveled          ! 13
dmfp=dmfp-delta*xsec       ! subtract current distance in mfp from dmfp  ! 14
z=z+w*delta                ! update position in z-direction
score=wate/100.0d0         ! since bin should contribute 1/100 of source
IF(z.LE.0.0d0)THEN
  nsampb(n,m)=nsampb(n,m)+1          ! leaks left
  bscore(n,m)=bscore(n,m)+score      ! store score
  bsumsq(n,m)=bsumsq(n,m)+score**2   ! store square for variance calculation
ELSE
  nsampc(n,m)=nsampc(n,m)+1          ! leaks right
  cscore(n,m)=cscore(n,m)+score      ! store score
  csumsq(n,m)=csumsq(n,m)+score**2   ! store square for variance calculation
ENDIF
nzcur=-1                   ! set nzcur=-1 if in outer zone (for escape) ! 18
RETURN                                                                 ! 19
END                                                                    ! 20
6. Variance Reduction Techniques 147
Table 6.10. Subroutine 'Stats' for Example 6.1 with Source Stratification
SUBROUTINE Stats                                                       !  1
REAL(8) tmp,tmp1,tmp2,var,stdev,totalsb,totalsc,tvar1,tvar2
REAL(8) bscore(10,10),bsumsq(10,10),cscore(10,10),csumsq(10,10), &
bpart(10),cpart(10),zstratb(10),zstratb2(10),zstratc(10),zstratc2(10)
COMMON/STAT/bscore,bsumsq,cscore,csumsq,bpart,cpart, &
nsamp(10,10),nsampb(10,10),nsampc(10,10)
ENTRY Statone  ! entry point to initialize arrays for complete problem !  8
bscore=0.0d0; bsumsq=0.0d0; cscore=0.0d0; csumsq=0.0d0                 !  9
nsamp=0; nsampb=0; nsampc=0
zstratb=0.0d0; zstratb2=0.0d0; zstratc=0.0d0; zstratc2=0.0d0
RETURN                                                                 ! 10
ENTRY Statlp   ! entry point to initialize arrays for a particle       ! 11
RETURN                                                                 ! 13
ENTRY Statelp  ! entry point to store scores for a particle            ! 14
RETURN                                                                 ! 19
ENTRY Statend  ! entry point to calculate and print results            ! 20
totalsb=0.0d0; totalsc=0.0d0; tvar1=0.0d0; tvar2=0.0d0
DO 200 i=1,10
DO 200 j=1,10
var=0.0d0
k=nsamp(i,j); tmp=DFLOAT(k)
IF(k.GE.1)THEN                             ! scores for left results follow *****
var=bsumsq(i,j)/tmp-(bscore(i,j)/tmp)**2   ! variance by bin
totalsb=totalsb+bscore(i,j)/tmp            ! overall cumulative score
zstratb(i)=zstratb(i)+bscore(i,j)/tmp      ! cumulative score in stratum i
bscore(i,j)=bscore(i,j)/tmp                ! score by bin
tvar1=tvar1+var/tmp                        ! variance of overall cumulative score
zstratb2(i)=zstratb2(i)+var/tmp            ! variance of stratum i score
bsumsq(i,j)=var/tmp                        ! store variance of bin score
var=0.0d0                                  ! scores for right results follow *****
var=csumsq(i,j)/tmp-(cscore(i,j)/tmp)**2   ! variance by bin
totalsc=totalsc+cscore(i,j)/tmp            ! overall cumulative score
zstratc(i)=zstratc(i)+cscore(i,j)/tmp      ! cumulative score in stratum i
cscore(i,j)=cscore(i,j)/tmp                ! score by bin
tvar2=tvar2+var/tmp                        ! variance of overall cumulative score
zstratc2(i)=zstratc2(i)+var/tmp            ! variance of stratum i score
csumsq(i,j)=var/tmp                        ! store variance of bin score
ENDIF
200 CONTINUE
stdev=DSQRT(tvar1)                         ! standard deviation of distr
WRITE(16,*)totalsb,stdev                   ! result and std dev of result
stdev=DSQRT(tvar2)                         ! standard deviation of distr
WRITE(16,*)totalsc,stdev                   ! result and std dev of result
OPEN(UNIT=18,file='output.txt')
DO 300 i=1,10
DO 300 j=1,10
WRITE(18,*)nsamp(i,j),cscore(i,j),csumsq(i,j),nsampc(i,j)
300 CONTINUE
WRITE(18,*)(zstratb(i),zstratb2(i),DSQRT(zstratb2(i)), &
zstratc(i),zstratc2(i),DSQRT(zstratc2(i)),i=1,10)
RETURN                                                                 ! 28
END                                                                    ! 29
The results for 'zstratc' and the associated variances from a calculation
using this stratified example with 10^7 start particles are shown in Table 6.11.
As expected, the major contribution to the variance in the right leakage
estimate is from scores for particles starting in the three rightmost spatial
bins; i.e., from strata numbers 8, 9, and 10. It would appear from these
results that selecting more start particles from the spatial bins near the right
surface than from the bins near the left or center of the slab would reduce
the variance in the right leakage and improve the efficiency of the
calculation. By symmetry, a similar statement could be made about the
leakage through the left surface. Therefore, because the total leakage is the
sum of the left and right leakages, one should be able to improve the
accuracy of the sum by starting particles preferentially near the surfaces of
the slab.
Table 6.11. Selected Output Data for Example 6.1 with Source Stratification
Spatial Stratum    Leak Right Probability    Leak Right Variance
 1                 2.9936E-6                 2.9869E-13
 2                 9.6002E-6                 9.5982E-13
 3                 2.1176E-5                 2.1144E-12
 4                 5.6987E-5                 5.6998E-12
 5                 1.4701E-4                 1.4654E-11
 6                 4.0062E-4                 3.9720E-11
 7                 1.0710E-3                 1.0449E-10
 8                 3.0157E-3                 2.8344E-10
 9                 8.7468E-3                 7.3295E-10
10                 2.9156E-2                 1.5494E-9
particles selected for that stratum and hence do not need to modify the
weight of the start particles in order to play a fair game.
Table 6.12. Modified Subroutines for Example 6.1 with Source Biasing and Stratification
Subroutine Location
'Col' Table 6.4
'Bdrx' Table 6.9
'Stats' Table 6.10
'Source' Table 6.13
Table 6.13. Subroutine 'Source' for Example 6.1 with Source Biasing and Stratification
SUBROUTINE SOURCE                                                      !  1
REAL(8) x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ                   !  2
COMMON/PART/x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ,nzcur,newzn,ngroup  ! 3
REAL(8) FLTRN,r
REAL(8) bscore(10,10),bsumsq(10,10),cscore(10,10),csumsq(10,10), &
bpart(10),cpart(10)
COMMON/STAT/bscore,bsumsq,cscore,csumsq,bpart,cpart, &
nsamp(10,10),nsampb(10,10),nsampc(10,10)
COMMON/EXAMPLE/n,m
DIMENSION ibias(3,22)
DATA ibias/0,0,4,0,0,4,0,0,4,0,0,4, &
1,4,3,1,4,3,1,4,3,2,7,2,2,7,2,3,9,1,4,10,1,5,11,1,6,12,1, &
7,13,2,7,13,2,8,15,3,8,15,3,8,15,3, &
9,18,4,9,18,4,9,18,4,9,18,4/
nzcur=1                    ! origin is in zone one
r=FLTRN()*22.0d0
i=INT(r)+1
z=((r-DFLOAT(ibias(2,i)))/DFLOAT(ibias(3,i))+DFLOAT(ibias(1,i)))/10.0d0
x=0.0d0; y=0.0d0           ! starts particle along z-axis
CALL ISOOUT                ! direction chosen isotropically            !  7
wate=1.0d0                 ! particle starts with a weight of one      !  8
m=INT(5.0d0*(w+1.0d0))+1   ! index for scoring by angle
IF(m.GT.10)m=10
n=INT(10.0d0*z)+1          ! index for scoring by location
IF(n.GT.10)n=10
nsamp(n,m)=nsamp(n,m)+1    ! number of particles in each bin
RETURN                                                                 !  9
END                                                                    ! 10
The left and right leakage results obtained from the three different
methods of solution of this example problem - analog, unbiased stratified,
and spatially biased stratified - are shown in Table 6.14. The efficiencies
for the three calculations, based on eqn 6.2, are shown in the last column of
the table. In each case 10^7 start particles were used. Unbiased source
stratification is found to increase the efficiency of the leakage calculation by
about 46% compared with the analog calculation. However, more
importantly this stratified source calculation provides the additional benefit
of generating data that can be used as the basis for selecting reasonable
spatial source-biasing parameters, as incorporated in the third calculation.
Using a simple spatial biasing scheme in combination with stratification
increases the efficiency of the calculation by an additional 54% compared
with the analog baseline. Thus we see that the use of both stratification and
where Σa is the absorption, and Σt the total, cross section. With survival
biasing particles get lighter in weight as they experience collisions in
materials for which the non-absorption probability is less than one. When
multiply-collided particles can contribute significantly to the desired answer
this technique can decrease the variance of the solution. The disadvantage
of survival biasing, when used in isolation, is that significant computation
time can be expended tracking particles that have very small weights and
therefore make small contributions to the result. This is shown in the
following example.
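The weight arithmetic of survival biasing can be sketched as follows (Python, illustrative only; the function names are ours, and c plays the role of the non-absorption probability):

```python
import random

def survival_bias_collision(wate, c):
    """Survival biasing: never kill at a collision; instead multiply
    the particle weight by the non-absorption probability c."""
    return wate * c

def analog_collision(wate, c, rng):
    """Analog treatment: kill outright with probability 1 - c."""
    return wate if rng.random() < c else 0.0

# After k collisions with c = 0.5 the biased particle always survives
# with weight 0.5**k, which equals the analog expectation value.
```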
    pk = 1 - w/wA                                                (6.4)
If the particle survives Russian roulette it is assigned the weight wA. Since
the probability of the particle surviving is equal to w/wA, and the game is
played only if w < wL, the ratio wL/wA controls the probability with which a
particle survives the game. If this ratio is small - i.e., if wL << wA - few
particles subjected to Russian roulette will survive. This may be undesirable
since many light-weight particles that have been tracked to some point in
phase space, perhaps laboriously, could be killed indiscriminately and rarely
replaced with a heavy particle. Therefore it is customary to set wA within an
order of magnitude of wL.
The real utility of Russian roulette is that the values of wL and wA can
vary throughout the geometry in any manner the user desires. Thus average
particle weights can differ in different parts of the problem geometry and the
floor below which particle weights cannot continue to decrease can vary
with location, energy, direction, or any other problem variables. This is
often desirable because as particles move from the source to the detector it
is usually better to use several, gradual changes in weight to smooth out the
scores rather than to use one large change.
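A minimal Russian roulette game, in Python for illustration (the function name and calling pattern are ours, not the PFC library's):

```python
import random

def russian_roulette(wate, wL, wA, rng):
    """Play Russian roulette when wate < wL: survive with probability
    wate/wA and continue with weight wA, otherwise kill the particle.
    The expected weight, (wate/wA)*wA = wate, is unchanged, so the
    game is fair."""
    if wate >= wL:
        return wate          # game not played
    if rng.random() < wate / wA:
        return wA            # survivor is restored to weight wA
    return 0.0               # killed
```

Averaged over many particles of weight 0.05 with wL = 0.1 and wA = 0.4, the surviving weight converges back to 0.05.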
Table 6.20. Modified Subroutines for Example 6.3 with Simple Russian Roulette
Subroutine Location
'Bdrx' Table 6.3
'Stats' Table 6.5
'Source' Table 6.17
'Col' Table 6.21
Table 6.21. Subroutine 'Col' for Example 6.3 with Simple Russian Roulette
SUBROUTINE COL                                                         !  1
REAL(8) FLTRN,delta                                                    !  2
REAL(8) x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ                   !  3
COMMON/PART/x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ,nzcur,newzn,ngroup  ! 4
REAL(8) dmfp,dtr,xsec,dcur                                             !  5
COMMON/TRACK/dmfp,dtr,xsec,dcur                                        !  6
REAL(8) sigt(20),c(20);  ! dimensions allow up to 20 different media   !  7
COMMON/GEOM/sigt,c;  ! sigt is total cross section, c is non-absorption prob  ! 8
delta=dmfp/xsec           ! distance traveled to collision             !  9
dtr=dcur+delta            ! update total distance traveled             ! 10
x=xo; y=yo; z=zo+w*dtr    ! update position, z-direction only
wate=wate*c(nzcur)        ! reduce wate by non-absorption probability  ! 12
IF(wate.LT.0.1d0)THEN     ! if wate small play Russian roulette        ! 13
  IF(wate.LT.FLTRN())THEN ! if particle killed by RR                   ! 14
    nzcur=-1              ! set nzcur=-1 to show particle killed       ! 15
    RETURN                                                             ! 16
  ENDIF                                                                ! 17
  wate=1.0d0              ! particle survived RR, increase wate        ! 18
ENDIF                                                                  ! 19
CALL ISOOUT               ! assumes isotropic scatter in lab system    ! 20
RETURN                                                                 ! 21
END                                                                    ! 22
the particle weight falls below the specified Russian-roulette value, or the
weight of the particle is increased to the survival value. Either way we
impose a lower limit on the particle weights and reduce the average length
of the particle random walks. By allowing lower weights near the detector
than far from the detector we kill light-weight particles that are produced far
from the detector but track such particles when they are produced near the
detector.
Table 6.22. Modified Subroutines for Example 6.3 Using Spatially Dependent Russian
Roulette
Subroutine Location
'Bdrx' Table 6.3
'Stats' Table 6.5
'Source' Table 6.17
'Col' Table 6.23
Table 6.23. Subroutine 'Col' for Example 6.3 Using Spatially Dependent Russian Roulette
SUBROUTINE COL                                                         !  1
REAL(8) FLTRN,delta,WA,WL
REAL(8) x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ                   !  3
COMMON/PART/x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ,nzcur,newzn,ngroup  ! 4
REAL(8) dmfp,dtr,xsec,dcur                                             !  5
COMMON/TRACK/dmfp,dtr,xsec,dcur                                        !  6
REAL(8) sigt(20),c(20);  ! dimensions allow up to 20 different media   !  7
COMMON/GEOM/sigt,c;  ! sigt is total cross section, c is non-absorption prob  ! 8
delta=dmfp/xsec           ! distance traveled to collision             !  9
dtr=dcur+delta            ! update total distance traveled             ! 10
x=xo; y=yo; z=zo+w*dtr    ! update position, z-direction only
wate=wate*c(nzcur)        ! reduce wate by non-absorption probability  ! 12
WA=4.0d0*DEXP(-z*10.0d0)  ! use -z*2.302585, -z, or -z*10.0
WL=WA/4.0d0
IF(wate.LT.WL)THEN        ! if wate small play Russian roulette
  IF(wate/WA.LT.FLTRN())THEN  ! if particle killed by RR
    nzcur=-1              ! set nzcur=-1 to show particle killed
    RETURN
  ENDIF
  wate=WA                 ! particle survived RR, increase wate
ENDIF
CALL ISOOUT               ! assumes isotropic scatter in lab system    ! 20
RETURN                                                                 ! 21
END                                                                    ! 22
8 The run times shown here provide only a rough indication of the changes in efficiency
resulting from the variance reduction techniques. Other factors, such as the optimization of
the executable program by the compiler, can result in changes in the run time and it is
difficult to achieve precise quantification of the changes produced by a particular technique.
Therefore, small changes in the run time may not reflect actual changes in efficiency.
that the path from the boundary to the first collision point does not
contribute much to the variance of the result. Furthermore, for thin regions,
playing the game only after collisions will eliminate the expenditure of
computer resources playing Russian roulette on particles that have a high
probability of passing through the region without suffering a collision.
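The Russian roulette game played in subroutine 'Col' can be summarized in a few lines. The sketch below is Python rather than the book's Fortran, with illustrative names; it also checks that the game is fair, i.e., that the expected tracked weight equals the original weight.

```python
import random

def russian_roulette(wate, WL, WA, rng=random.random):
    """Play Russian roulette on a particle of weight wate.

    If the weight is below the lower limit WL, the particle survives
    with probability wate/WA and its weight is raised to WA; otherwise
    it is killed.  Returns the new weight, or None if killed.
    """
    if wate >= WL:
        return wate               # weight acceptable, no game played
    if wate / WA < rng():
        return None               # particle killed
    return WA                     # particle survived, weight increased

# Fair-game check: the mean tracked weight should equal the input weight.
random.seed(1)
w0, WL, WA = 0.05, 0.25, 1.0
n = 200_000
total = sum(w for w in (russian_roulette(w0, WL, WA) for _ in range(n))
            if w is not None)
# total/n should be close to w0 = 0.05
```

Because a killed particle scores nothing and a survivor carries weight WA with probability w0/WA, the expected weight per game is w0, which is what makes the game fair.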
6.5 Splitting
exceed the upper weight limit, they may be split again, until the new weight
w′ < WH. Alternatively the particle may simply be split into [[w/WH]] + 1
particles (where [[ ]] refers to the integer part of the quantity in brackets)
and the incoming weight shared among the split particles. Each of the new,
light-weight particles can then be tracked in turn.
Let us assume we wish the weight of split particles to be the same as the
Russian roulette survival weight. If w > WH the particle is split into as many
particles of weight wA as possible. For example, n particles of weight wA
will be created if nwA ≤ w < (n+1)wA. If the remaining weight wr = w −
nwA is greater than zero but less than WL, one can play Russian roulette on
this "particle." If WL < wr < wA, one may simply assign the particle the
weight wr. That is, one can split off particles of weight wA from the parent
particle until the remaining weight is less than wA. The last particle is then
given the residual weight. Alternatively, one can kill the particle with the
probability pk, with the weight w in eqn 6.4 set to wr, and assign the weight
wA to the surviving particles. All of these options constitute a fair game and
the choice of which method to use is up to the user.
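One of these splitting options can be sketched as follows. This is an illustrative Python fragment, not the book's Fortran; it uses the "keep the residual, or roulette it if it is below WL" handling described above.

```python
import random

def split_particle(w, wA, WL, rng=random.random):
    """Split a heavy particle of weight w into particles of weight wA.

    Chunks of weight wA are peeled off until the remainder is below wA.
    The remainder is kept as a final light particle if it is at least WL;
    otherwise Russian roulette is played on it, survivors getting weight
    wA.  Returns the list of weights of the resulting particles.
    """
    n = int(w // wA)              # number of full-weight particles
    weights = [wA] * n
    wr = w - n * wA               # residual weight
    if wr >= WL:
        weights.append(wr)        # keep the light particle as-is
    elif wr > 0.0 and rng() < wr / wA:
        weights.append(wA)        # residual survived Russian roulette
    return weights

random.seed(2)
parts = split_particle(3.7, wA=1.0, WL=0.25)
# the residual (about 0.7) exceeds WL, so no roulette is played and the
# split is deterministic: three particles of weight 1.0 plus the residual
```

In expectation the total weight is conserved, so the game is fair whichever branch is taken.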
If splitting occurs at a boundary crossing, the new, split particles are
given phase space properties (energy, location, direction of travel) that are
identical to the parent particle. If particles are split at collision points one
may assume the splitting occurs either before or after the collision. Thus the
split particles can emerge from the collision traveling in either the same or
different directions. Choosing whether the splitting should occur before or
after the collision depends on the problem and the result that is being
sampled. Usually the direction in which the particle leaves the collision
point is important in determining its contribution to the answer of interest
and it is better to select from different scattering directions rather than
assigning the same direction to all of the split particles.
We introduce a new feature into the practical problem of executing
random walks when we include particle splitting. Up to now we have
initiated particle random walks only by selecting particles from a source
distribution. With splitting we have a set of particles that are "new" to the
calculation, and that we must treat essentially as start particles, but that are
not produced by selection from the source distribution. When we are ready
to track these split particles we must have a way to obtain their location,
energy, direction, and other relevant data. One way to do this is to save, or
"bank," them and retrieve their phase-space coordinates when needed. To
do this we will introduce the concept of a particle bank, in which untracked
particles are stored while other calculations are made. The banked particles
will be selected and tracked in some appropriate order, and the calculation
will be complete only when all start particles and banked particles have been
tracked.
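A minimal particle bank can be sketched as a last-in, first-out stack. The Python fragment below is illustrative (the book implements its bank in Fortran); the phase-space fields and entry-point names 'bankin'/'bankout' follow the text.

```python
from collections import namedtuple

# One banked particle: position, direction, energy, and weight.  Field
# names are illustrative; time and other data could be added later.
Particle = namedtuple("Particle", "x y z u v w energy wate")

class Bank:
    """A minimal LIFO particle bank."""
    def __init__(self):
        self._store = []
    def bankin(self, p):
        """Store an untracked particle while other calculations proceed."""
        self._store.append(p)
    def bankout(self):
        """Retrieve the most recently banked particle, or None if empty."""
        return self._store.pop() if self._store else None
    def __len__(self):
        return len(self._store)

bank = Bank()
bank.bankin(Particle(0, 0, 0, 0, 0, 1, 1.0, 0.5))    # a split particle
bank.bankin(Particle(0, 0, 2, 0, 0, -1, 1.0, 0.25))  # a secondary
# The random-walk loop keeps pulling from the bank until it is empty:
while len(bank) > 0:
    p = bank.bankout()
    # ... track particle p ...
```

The calculation is complete only when both the source list and the bank are empty, exactly as stated above.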
Particle banks are useful for purposes other than storing split particles. If
we wish, we can generate all of our start particles at one time, store them in
the bank, and recall them from the bank as needed. Split particles and
secondary particles that arise in the particle walks can be added to the bank
as they are produced, and can be tracked later. Particles can be placed in the
bank at any time and for any reason, and removed from the bank for
subsequent tracking as desired. Thus it is not necessary to complete a track
before starting another particle. For example, neutrons might be tracked
until they produce a fission event. Each fission event can be considered to
produce a new generation of neutrons, and the phase-space coordinates of
all such fission neutrons can be banked until the tracks of all neutrons in the
previous generation have been terminated. A new calculation can then
proceed with the tracking of the new generation of neutrons. We will see
examples of this in Chapter 8.
The particle bank must include all of the phase space information
associated with each particle. As we have seen, this consists of at least six
variables or, for convenience in programming, seven variables - u, v, w, x,
y, Z, and E. Later we will add the time variable t to this list. It is often
desirable to include additional information to the bank, such as whether the
particle is part of the initial source or is a split or secondary particle, the
material or zone in which the particle is located, other geometry information
relevant to the particle and, last but not least, the particle weight. When
results are calculated per start particle it is necessary to have sufficient
information in the bank to ensure that the scores from split particles are
tallied together with the parent particle.
If we wish to track many particles we can see that, if we use the particle
bank to store all start particle information at the same time, plus allow for
split and secondary particles, the size of the bank can quickly become a
limiting factor in the calculation. One way to restrict the size of particle
banks is to divide the problem into groups, or "batches" of particles, and to
use the particle bank to store data for only one batch at a time. As we will
see later, instead of treating individual particles as the basis for variance
estimates it is possible to use such batches as the basis for statistical
analysis. Batches can also be useful in certain types of eigenvalue
calculations.
The option for banking split particles that will be used here is to store the
parent particle phase-space coordinates and weight in the bank. When the
code is ready to retrieve a particle from the bank it will retrieve this particle
and compare its weight with WH. If the weight is greater than WH an amount
wA will be subtracted from the weight of the banked particle and assigned,
along with the appropriate phase space parameters, to the next particle to be
tracked. The weight of the banked particle will then be reduced by the
amount wA. When the weight of the banked particle falls below wA the final
split particle will be assigned this residual weight. Using this technique a
single bank location can be used to store each particle that is to be split.
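The peel-off scheme just described can be sketched as follows. This is an illustrative Python fragment; for simplicity the banked weight is compared directly against wA.

```python
def peel_from_bank(bank_wate, wA):
    """Retrieve one split particle from a banked parent of weight bank_wate.

    While the banked weight exceeds wA, a particle of weight wA is peeled
    off for tracking; the last particle carries the residual.  Returns
    (weight_to_track, remaining_banked_weight).
    """
    if bank_wate > wA:
        return wA, bank_wate - wA
    return bank_wate, 0.0         # final split particle gets the residual

# A parent of weight 2.3 with wA = 1.0 yields three tracked particles
# while occupying only a single bank slot:
weights = []
remaining = 2.3
while remaining > 0.0:
    w, remaining = peel_from_bank(remaining, 1.0)
    weights.append(w)
# weights is approximately [1.0, 1.0, 0.3]
```

The parent's weight is conserved exactly, and no list of split daughters ever has to be stored.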
The weight window of each region can be defined as narrow as desired;
however, importance sampling suffers from diminishing returns. The
implementation of variance reduction methods requires user effort to define
the parameters, and computer resources to execute the procedures. The
reduction in variance produced by such measures must therefore be balanced
against this expenditure of resources. At some point the return in reduced
variance per unit of resources devoted to importance sampling will begin to
decline and the user normally will decide that enough is enough. Generally,
at that point one will simply run the calculation long enough to obtain the
answer to the desired accuracy.
(6.7)
The latter values ensure the splitting of particles that penetrate most of the
slab while suffering only a few collisions, and provide increased sampling
of the space close to z = 1. Since we will bank all of the parent particle
weight into one bank location, and split off particles with the desired weight
as required, only a few bank positions should be needed.
A listing of the modified subroutines for this example is given in Table
6.25. Subroutine 'Walk' (Table 6.26) includes a test following each particle
random walk to determine whether split particles have been added to the
bank. If so, these particles are processed before the next source particle is
chosen. The coding added between lines 21 and 22 determines whether split
particles need to be processed. In this case the entry point 'Bankout' of the
new subroutine 'Bankin' is called to retrieve a split particle. The coding
between lines 6 and 7 provides the data needed to manage the particle bank
and to initialize some variables. Finally, the coding between lines 9 and 10
resets the number of split particles to zero at the beginning of each source
particle track.
Table 6.25. Modified Subroutines for Example 6.4
Subroutine Location
'Source' Table 6.17
'Walk' Table 6.26
'Bdrx' Table 6.27
'Col' Table 6.28
'Bankin' Table 6.29
'Stats' Table 6.30
current results are almost identical to those obtained previously. The effect
is small because splitting is a rare event in this calculation. Although 55.5%
of the total weight of all start particles is involved in Russian roulette, only
0.0023% of the weight is involved in splitting. The present results therefore
do not provide much information about the effect of splitting on the
efficiency of a Monte Carlo calculation. However, splitting can be
important when used with biasing schemes that can cause particle weights to
increase significantly in regions of interest.
Table 6.31. Results for Example 6.4 with 10^7 Start Particles, 10-mfp Slab, c=0.5
            Run time (s)  Reflected      ε       Transmitted    ε
Analog      75            0.1151±1.0-4   1.3+6   1.35-4±3.7-6   9.85+8
RR, 10^-z   71            0.1150±1.9-4   4.1+5   1.27-4±3.1-6   1.48+9
RR + Split  70            0.1150±1.9-4   4.1+5   1.26-4±3.1-6   1.53+9
corresponding to about one mfp may require a large number of regions, and
the problem efficiency may be decreased by imposing excessive boundary
crossings and importance sampling calculations. Some form of compromise
is usually required.
(6.8)
(6.9)
Here ξ is a random number evenly distributed between zero and one. Let us
define an artificial interaction probability p* such that

p*(x) = Σt* e^(−Σt* x)    (6.10)

where

Σt* = Σt − g    (6.11)

That is, we artificially modify the total cross section by an amount g, where
g can be positive or negative and can depend on location, energy, and
x* = −(ln ξ)/Σt*    (6.12)
If g is greater than zero the average of the modified flight paths x* will
be longer than the average of the unmodified flight paths x, and the paths
will have been "stretched." Alternatively, if g is less than zero the average
of the modified flight paths will be shorter than the average of the
unmodified flight paths, and the paths will have been shortened. To
maintain a fair game, the particle weight w must be adjusted to compensate
for the biased selection of shorter or longer than average flight paths. Since
by eqn 2.92 we require
(6.14)
Thus if g is greater than zero we will have w* < w for large x and w* > w
for small x. A fair game is maintained because the probability of a given
weight going a given distance is the same in the biased as in the unbiased
case. As positive g approaches Σt the magnitude of weight change resulting
from this biasing will increase without bound, which is why it is necessary
to restrict g < Σt.
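The path stretching of eqn 6.12 and its weight compensation can be checked numerically. In the Python sketch below (illustrative; the function name and the fair-game check are ours, not the book's) the flight path is sampled from the modified cross section Σt* = (1 − p)Σt, and the weight multiplier is the ratio of the true to the biased path-length densities; the weighted probability of a flight exceeding a distance d then reproduces the analog value e^(−Σt d).

```python
import math, random

def stretched_flight(sig_t, p, rng):
    """Sample a flight path with the exponential transform.

    The modified cross section is sig_star = (1 - p)*sig_t for p in (-1, 1),
    the path is x = -ln(xi)/sig_star, and the weight multiplier restoring a
    fair game is f(x)/f*(x) = (sig_t/sig_star)*exp(-(sig_t - sig_star)*x).
    Returns (path_length, weight_multiplier).
    """
    sig_star = (1.0 - p) * sig_t
    x = -math.log(rng.random()) / sig_star
    wmult = (sig_t / sig_star) * math.exp(-(sig_t - sig_star) * x)
    return x, wmult

# Fair-game check: the weighted probability of flying farther than d
# must reproduce the analog value exp(-sig_t * d).
rng = random.Random(3)
sig_t, p, d, n = 1.0, 0.7, 5.0, 200_000
est = sum(w for x, w in (stretched_flight(sig_t, p, rng) for _ in range(n))
          if x > d) / n
exact = math.exp(-sig_t * d)
# est should agree with exact to within a few percent
```

With p = 0.7 the stretched paths reach 5 mfp far more often than analog sampling would, and the reduced weights compensate exactly on average.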
Because Σt varies with both particle energy and the material through
which the particle is traveling, it is usually inconvenient to require the user
to specify g as an input parameter. Instead one generally defines a
normalized exponential transform parameter p,

p = g/Σt    (6.15)

such that p varies over the range [−1, 1] as g varies over [−Σt, Σt]. The user
may then define the exponential transform by the parameter p and does not
need to know Σt. Since p < 1, we can define a quantity B by
B = 1/(1 − p) = Σt/Σt*    (6.16)
Then
(6.17)
and
x* = Bx    (6.18)
w* = wB e^(−pΣt x*)    (6.19)

p = μp0    (6.20)
where p0 < 1. Thus, on the average, particles traveling towards the detector
will have their flight paths stretched and those traveling away from the
detector will have their flight paths shrunk. The maximum value of the
stretching parameter will be p0, and this value will be obtained only when
the flight path is pointed either directly towards (p = p0) or directly away
from (p = −p0) the detector. No modification is made to the flight paths of
those particles traveling at right angles to the direction between the collision
point and the detector.
Let us again address the problem of particles penetrating a thick slab, but
this time let us add exponential transfonn to our arsenal of variance
reduction techniques. As before we will assume the particles are
monoenergetic and undergo isotropic scatter in the laboratory coordinates.
We will assume a unit slab thickness and use the cross section to set the
thickness in mfp. We will leave the Russian roulette and splitting
parameters set for the most part as in Example 6.4, and add direction-dependent
exponential transform using a modified form of eqn 6.20. That
is, since we seek only to determine the transmission through the slab and not
the radial position of the particles inside the slab, we will set the stretching
parameter p = wp0, where w is the z-direction cosine for the post-collision
particle track. The maximum value of the transform variable p0 will be
constant throughout the slab.
A listing of the modified subroutines used for this example is given in
Table 6.32. The modification of the flight path for exponential transform
based on eqn 6.12 is performed in subroutine 'Dist.' This is shown in Table
6.33. The value of p (the variable 'rho') is set in this subroutine. The
particle weight is then modified using eqn 6.19. A modified version of
subroutine 'Col' is shown in Table 6.34. This subroutine is the same as that
used in Example 6.4 except the Russian roulette parameters have been
changed to values appropriate to the p values used in the problem. In the
example calculations, the weight that would emerge from a first collision at
a given point is chosen as the wA value at that point. Since all source
particles start in the positive z direction, they have the maximum
exponential transform path stretching parameter p0 applied. The weight
emerging from the first collision is the entering weight times the non-absorption
probability. Therefore, using eqn 6.19, wA is given by

wA = c e^(−Σt p0 z)/(1 − p0)    (6.21)
The results of running the above code, for a 10-mfp slab using a non-absorption
probability of 0.5 and 10^6 start particles, are shown in Table 6.35.
In this table the reflected and transmitted fractions are given for values of p0
from 0.5 to 0.99. As might be expected, we see that using an exponential
transform to stretch the particle flight paths towards the downstream, or
right, face of the slab reduces the efficiency of the reflection calculation.
However, the use of path stretching in this case is intended only to improve
the efficiency with which the transmission probability is calculated. From
the table it is apparent that this result has been achieved for all values of
p0 shown here, with a maximum gain in efficiency of roughly a factor of 200
compared with the best result from Example 6.4 (1.53+9). The results show
that the efficiency reaches a maximum near p0 = 0.9, although even the
extreme value of 0.99 gives a greater efficiency than the nominal value
usually recommended for path stretching of p0 = 0.5. For this simple
example of a thick slab in one dimension the use of the exponential
transform clearly has much greater value than the survival biasing, Russian
roulette, and particle splitting used previously.
Table 6.35. Results for Example 6.5 for 10^6 Start Particles, 10-mfp Slab, c=0.5
p0    Run time  Reflected       ε      Transmitted      ε
0.5   25 s      0.1155±.000318  4.0+5  1.282-4±9.45-7   4.5+10
0.8   25 s      0.1143±.000586  1.2+5  1.281-4±3.86-7   2.7+11
0.9   21 s      0.1143±.000888  6.0+4  1.276-4±3.87-7   3.2+11
0.99   7 s      0.1140±.00279   1.8+4  1.288-4±1.26-6   9.0+10
Exercises
7.1 Introduction
Ψ(r,Ω,E) = ∫₀^∞ e^(−β) [S(r′,Ω,E) + ∫∫ Σs(r′;Ω′,E′→Ω,E) Ψ(r′,Ω′,E′) dΩ′ dE′] ds    (7.1)

where r′ = r − sΩ, and

β = ∫₀^s Σt(r − s′Ω,E) ds′    (7.2)
a With finite-length words in a digital computer the probability is not actually zero, but it is
quite small.
7. Monte Carlo Detectors 177
A flux estimate can be obtained at the point r by evaluating eqn 7.1 for each
source event and each collision event encountered in the particle tracking.
The integral transport equation can be written in terms of a transfer
kernel as
where ψ is the angular flux and P is a point in phase space. In this
formulation K is a transfer kernel that is equal to the probability that a
particle suffering a collision at P′ leaves the collision and arrives at P. S(P)
is the uncollided angular flux at P that arrives from externally applied
sources.
In the first term on the right side of eqn 7.3 we have ψ(P′)Σt(P′) =
density of particles entering collisions in dP′, where the element of phase
space dP′ = d³r′ dE′ dΩ′. The kernel K can be separated into two terms,
Here Ω is a unit vector in the direction from r′ to r. Given that a particle is
not absorbed at a collision, define the probability of scattering from Ω′ to Ω
per steradian, and E′ to E, as p(Ω′·Ω, E′→E). Then, if the non-absorption
probability for the collision at P′ is Pna, the first term in K can be written

p(Ω′·Ω, E′→E) Pna / |r − r′|²    (7.4)
p(Ω′·Ω, E′→E) = 1/(4π)    (7.5)
That is, the value of p given by eqn 7.5 is the probability of scattering from
Ω′ to Ω per steradian, which is constant for isotropic scatter in the
laboratory system. The second factor in the expression for K is the familiar
attenuation factor e^(−β), where β is given by eqn 7.2 for s = |r − r′|.
Applying these definitions to eqn 7.3, and omitting the fixed-source term
S(P), we obtain the collided flux estimate at the point r,
φc(r) = ∫∫∫ ψ(P′) Σt(P′) p(Ω′·Ω, E′→E) Pna e^(−β) / |r − r′|² d³r′ dE′ dΩ′    (7.6)
where the integrals are over the volume of the problem space, all relevant
energies, and all incoming directions Ω′ at the collision point r′. To obtain
the collided flux for the next-event estimator in a Monte Carlo transport
calculation we will evaluate the integrand of eqn 7.6 at each collision point.
The solution to eqn 7.3 will then consist of the uncollided flux term S(P)
and the collided flux estimate from eqn 7.6.
For example, assume we have designated a point detector at the location
Pd = (xd, yd, zd), and our particle random walk has resulted in a collision at Pc
= (x, y, z). This geometry is shown in Figure 7.1. We wish to determine the
probability of the post-collision particle leaving the collision point in the
direction of the detector and traveling from the collision point to the
detector without suffering an intervening collision. This estimate can be
viewed as an anticipatory, or expectation, score. Such an expectation is
valid whether or not the particle in fact scatters toward the detector and
whether or not its next flight path is long enough to reach the detector. We
can obtain an expectation score at the detector point and need never have a
particle actually strike the detector. The mean of the expectation scores will
be equal to the response of the detector to all source particles that have
undergone collisions during the particle random walk.
To score the monoenergetic, post-collision particle flux in a next-event
estimator with isotropic scatter in the laboratory system, and for a single
material with constant cross section, we can assemble the terms in eqn 7.6 to
obtain
φ = w Pna e^(−Σt r) / (4π r²)    (7.7)
Here w is the weight of the particle entering the collision. The non-absorption
probability Pna is the ratio of the scatter to the total cross section,
Σs/Σt. The quantity r is the distance between the collision point and the
detector point,

r = [(x − xd)² + (y − yd)² + (z − zd)²]^(1/2)    (7.8)
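The collided score of eqn 7.7 at a single collision can be sketched as follows, assuming a single uniform medium and isotropic scatter in the laboratory system. This is an illustrative Python fragment with hypothetical names; the book accumulates the corresponding tally in its Fortran subroutine 'Col.'

```python
import math

def next_event_score(w, pna, sig_t, collision, detector):
    """Expectation score at a point detector from one collision (eqn 7.7 form).

    w         : particle weight entering the collision
    pna       : non-absorption probability (sigma_s / sigma_t)
    sig_t     : total cross section of the single, uniform medium
    collision : (x, y, z) of the collision point
    detector  : (xd, yd, zd) of the detector point
    Assumes isotropic lab-system scatter, so the probability of scattering
    toward the detector per steradian is 1/(4*pi).
    """
    r = math.dist(collision, detector)        # the distance of eqn 7.8
    return w * pna * math.exp(-sig_t * r) / (4.0 * math.pi * r * r)

# A unit-weight particle colliding 2 mfp from the detector, with c = 0.5:
score = next_event_score(1.0, 0.5, 1.0, (0.0, 0.0, 0.0), (0.0, 0.0, 2.0))
# score = 0.5 * exp(-2) / (16 * pi), roughly 1.3e-3
```

Note that the score is computed whether or not the particle actually scatters toward the detector; that is what makes it an expectation score.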
Figure 7.1. Next-Event Estimator Geometry
Eqn 7.7 is the simplest form for the collided contribution to the next-
event flux estimator. It does not apply for anisotropic scattering in the
laboratory system or for situations in which the path from r' to r passes
through different materials. In addition, energy-dependent problems require
correlating the scattering angle with the post-collision energy. In all of these
cases the correct form for the scattering probability p and the optical path
length β must be used.
flux over a spherical shell. The flux determined for the point detectors
arrayed along the Z axis would obviously be greater for a parallel beam
source directed along that axis than for an isotropic source at the origin.
Therefore we will start the source particles in random directions and score
the flux estimate following each collision according to eqn 7.7. This will
provide the collided portion of the total flux estimate for each detector.
To obtain the uncollided flux estimate we must estimate separately the
uncollided score in each detector for each source particle. We could do this
by applying eqn 7.7 to every start particle with w set equal to the start
weight and Pna = 1. However, for a normalized, point, isotropic,
monoenergetic source the uncollided flux at a point can be calculated
analytically,
φu = e^(−r)/(4π r²)    (7.9)
where r is in units of mfp. Using this expression for the uncollided flux, the
total flux estimate will be the sum of the analytic uncollided score and the
Monte Carlo estimate of the collided score. Because the uncollided result is
exact, the variance in the total flux estimate will be equal to the variance in
the collided score. For most complex sources a c1osed-fonn expression for
the uncollided contribution at a point does not exist and this contribution
must be estimated by Monte Carlo methods. In such cases the uncollided
flux estimates will also carry a non-zero variance and the variance in the
sum of the uncollided and collided contributions must include the variances
from both contributions.
The coding we will use for this example is similar to that of Example 5.1
and the problem geometry is the same as that given in Table 5.23. The cross
section information is given in Table 5.24. The list of the modified
subroutines to be used in this calculation is given in Table 7.1. The logical
place to score the collided flux estimates for our point detectors is in
subroutine 'Col' immediately following each collision. A revised
subroutine that performs this calculation is shown in Table 7.2. The score
of eqn 7.7 is stored in the variable 'tally' and the tallies are summed for each
detector in the array 'cpart.' In this example, subroutine 'Stats' is used to
compute the estimate of the response for the collided portion of the flux.
The total score for each detector is then obtained by adding the uncollided
contribution. The modified subroutine 'Stats' is shown in Table 7.3.
The results obtained for 10^3, 10^4, 10^5, and 4 × 10^5 start particles are shown
in Table 7.4. The standard deviations associated with these results, which
are also presented in the table, indicate that the collided flux estimates
should be of reasonable accuracy. However, it can be seen from the table
that these standard deviations do not decrease by the reciprocal of the square
root of the number of particles tracked as they would if the random variables
were normally distributed. Furthermore, the variation of the flux values
among the results is often large compared with the estimated standard
deviations. These peculiarities are inherent in the next-event estimator for
cases where collisions can occur in the vicinity of the detector.
A plot of flux versus radius for the four sets of results presented in Table
7.4 is shown in Figure 7.2. Since the same starting seed was used in each
calculation, the results obtained when using a given number of start particles
subsume those obtained using a smaller number of start particles, and thus
the figure shows the effect of increasing the number of particles tracked. It
is clear that the results do not move smoothly towards a stable answer for
every detector as the number of start particles is increased. The flux at
detector ten (r = 9 mfp) is high for 10^3 start particles but then appears stable
for larger numbers of start particles. The results for detectors eight and nine
(r = 7 and 8 mfp), on the other hand, start low and increase continually with
Table 7.4. Results for Example 7.1
Det.  Radius  10^3 Particles  10^4 Particles  10^5 Particles  4×10^5 Particles
1     0.1     9.047±.184      9.290±.084      9.185±.029      9.214±.028
2     1.0     0.285±.041      0.251±.012      0.245±.0091     0.2521±.0015
3     2.0     0.092±.0123     0.092±.0041     0.099±.0026     0.099±.0020
4     3.0     0.0494±.0070    0.0708±.0119    0.0587±.0018    0.0593±.0016
5     4.0     0.0406±.0153    0.0350±.0023    0.0371±.0009    0.0383±.0016
6     5.0     0.0182±.0025    0.0307±.0056    0.0240±.0007    0.0245±.0004
7     6.0     0.0143±.0027    0.0172±.0017    0.0177±.0008    0.0171±.0004
8     7.0     0.0073±.0014    0.0086±.0006    0.0110±.0005    0.0149±.0039
9     8.0     0.0046±.0009    0.0045±.0004    0.0065±.0002    0.0071±.0002
10    9.0     0.0108±.0055    0.0042±.0008    0.0043±.0003    0.0040±.0002
11    9.9     0.00135±.0004   0.0035±.0023    0.0016±.0002    0.0016±9.9-5
When the present results are compared with prior estimates of the fluxes
at the detectors, such as those in Figure 3.3, it is clear that they are mostly
reasonable. However, in most cases the point detector flux estimates are
low by more than would be expected from the estimated standard deviations.
Thus the standard deviation appears to be consistently underestimated. As
will be discussed below, one should be skeptical of results obtained using a
next-event estimator unless it can be shown that the problem phase space
has been adequately sampled, and the estimate of the standard deviation for
scores in such detectors should not be trusted.
Figure 7.2. Flux versus Radius for Example 7.1 (start particles: 10^3, 10^4, 10^5, 4 × 10^5)
(7.10)
(7.11)
That is, no matter how close to the detector point a collision occurs, the
value of the integrand in eqn 7.6 will remain finite. Hence the next-event
estimate of the flux, which is based on this equation, will converge.
The most important region of phase space for scoring the collided
contributions to the flux estimate in a next-event estimator is the region
close to the detector. In practice then, unless the detector is in a void, the
estimator requires that collisions close to the detector point be included in
the score in order to produce a correct answer. The result produced by a
next-event estimator usually converges to the correct solution from below
because in most calculations collisions close to the detector occur rarely
and, until they occur, the mean contribution to the flux at the detector is low.
To estimate the variance in the result from a Monte Carlo detector we
tally the square of each score along with the score itself. For the next-event
estimator this means we sample the square of the integrand of eqn 7.6.
From eqns 7.6 and 7.11 the differential volume, plus the 1/r² term in the
integrand for the ⟨φ²⟩ estimator thus becomes
that the scores do not fall on a normal curve, and thus that the estimated
variance of the detector scores is not proportional to 1/n.
The variance in the score for a next-event estimator does not necessarily
improve as more particles are tracked. As a result, instead of a low variance
estimate providing confidence in the result, such a variance could be a
warning sign to the user. A next-event estimator result with a small variance
estimate probably includes few or no contributions from nearby collisions
and thus the flux estimate is probably low. This explains the peculiar trends
exhibited by the results for detectors eight and nine in Example 7.1.
The problem of the infinite variance catastrophe in the next-event
estimator has been studied in detail. A number of methods have been found
for modifying the estimator to produce an acceptable result with a bounded
variance. One such modification is to assume the flux is constant in some
spherical region about the detector and to treat all collisions inside the
sphere identically by defining an average score within the sphere.1 This
technique works reasonably well in many circumstances. Another
procedure2 is to score e^(−Σt r)/(4π r0²) for each collision inside the bounding
sphere. The value of the constant ro can be calculated based on the
approximate flux shape expected inside the bounding sphere. Finally, one
may compute the adjoint flux (see Chapter 9) on a surface surrounding the
detector point. In this case, when a particle encounters the surface a score
equal to the product of the forward and adjoint fluxes on the spherical
surface is tallied. Collisions inside the surface are not scored.
Even when the next-event estimator is modified in order to eliminate
unbounded contributions to the variance, it still often underestimates both
the flux and the variance because of the difficulty of obtaining an adequate
sampling of collisions in the vicinity of the detector. One of the advantages
of using a bounding surface modification to the estimator is that such a
modification provides an easy method of keeping track of the number of
collisions that occur close to the detector. If the number of collisions inside
the bounding surface is not consistent with that expected from the flux
estimate obtained for the detector then the phase space near the detector has
probably not been adequately sampled.
In Section 3.3 it was shown that the scalar flux is related to the reaction
rate per unit volume, R, by
R = Σφ    (7.13)
φ = C/(Σt V)    (7.14)
where C is the number of collisions per unit time and energy in the volume
V. The volume of space within which we wish to estimate the average flux
φ does not have to be defined by our geometry; i.e., it need not coincide with
a spatial zone in the problem. However, it is frequently convenient to make
a volume detector coincident with such a geometric zone. Obviously
particle tracks must intersect the detector volume, and collisions must occur
in the volume, in order to obtain an estimate of the flux from eqn 7.14. A
detector that uses this equation to estimate the flux in a region is called a
collision-density flux estimator.
An alternative volumetric flux estimator can be obtained from the
definition of particle flux, which is the product of the particle density times
the particle speed (eqn 3.6). The flux is thus equal to the sum of the
distances traveled by all the neutrons that pass through a unit volume of
space per unit time, energy, and direction. Because of this definition, the
particle flux is sometimes referred to as the track length.3 By definition,
therefore, the flux in a volumetric detector is equal to the sum of the lengths
of the particle tracks in the detector that lie within the requisite limits of
time, energy, and angle, divided by the widths of the time, energy, and angle
intervals selected for scoring, and divided by the volume of the detector.
An estimator that scores all particle tracks within a specified volume is
thus a total, or scalar, flux estimator. Such an estimator requires particle
tracks to intersect the detector volume in order to obtain a score, but does
not require collisions to occur within the detector volume. In effect, the
estimator provides information about the flux continuously along the
particle flight paths instead of at collision points. Because collisions are not
required to produce the estimate, a track-length flux estimator can be used in
voids as well as in material regions.
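The two volumetric estimators can be sketched together. In the Python fragment below (illustrative names; the book scores the corresponding sums into the Fortran arrays 'bpart' and 'cpart') each track segment within the region contributes weight times length, each collision contributes the colliding weight, and eqn 7.14 converts the collision sum to a flux.

```python
class VolumeDetector:
    """Track-length and collision-density flux estimators for one region.

    Per start particle, the track-length flux estimate is sum(w*l)/V and
    the collision-density estimate is sum(w at collision)/(sig_t*V), as
    in eqn 7.14.  A sketch only, for a monoenergetic scalar-flux tally.
    """
    def __init__(self, volume, sig_t):
        self.volume, self.sig_t = volume, sig_t
        self.track_sum = 0.0      # sum of weight * track length in region
        self.coll_sum = 0.0       # sum of weights entering collisions
    def score_track(self, wate, length):
        self.track_sum += wate * length
    def score_collision(self, wate):
        self.coll_sum += wate
    def flux_estimates(self, n_start):
        track_flux = self.track_sum / (self.volume * n_start)
        coll_flux = self.coll_sum / (self.sig_t * self.volume * n_start)
        return track_flux, coll_flux

det = VolumeDetector(volume=2.0, sig_t=1.0)
det.score_track(1.0, 0.1)         # a particle crosses the region
det.score_track(0.5, 0.1)         # a second, lighter particle crosses
det.score_collision(0.5)          # the second one collides inside
track_flux, coll_flux = det.flux_estimates(n_start=10)
# track_flux = (0.1 + 0.05)/(2*10); coll_flux = 0.5/(1*2*10)
```

The track-length estimator scores whenever a track intersects the region, while the collision estimator scores only when collisions occur there, which is why only the former is usable in voids.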
estimators. For our volume detectors we will define regions 0.1-mfp thick
about each unit radius interval, plus a 0.1-mfp-thick region at the outer edge
of the geometry. These are mathematical regions only and involve no
change in material or zone specifications. We will score the total track
length of all particles passing through these regions as well as the number of
collisions in each region.
The PFC geometry input file for this calculation, 'geom.txt,' is shown in
Table 7.5. The first nine detectors are defined by spheres numbered 1
through 18, which have radii 0.05 mfp on either side of the integer distances
1 through 9 mfp from the source. The tenth detector is defined by sphere
19, which has an inner radius 0.1 mfp less than the outer diameter of the
problem geometry, and an outer boundary coincident with that of the
problem geometry. The first ten of the 21 zones in the problem are detector
zones. The total cross section and non-absorption probability are set to 1.0
for all zones.
The list of the modified subroutines to be used in this calculation is given
in Table 7.6. Subroutine 'Source' is shown in Table 7.7. Although the
library source routine from Chapter 5 starts particles isotropically at the
origin, as is desired in this problem, the library subroutine assumes the
origin is in zone 1. For this problem the origin is in zone 11 and 'Source'
has been modified accordingly. The reason for using zones 1 through 10 for
the detectors is to simplify the scoring of collisions and tracks in the
detector zones. Subroutine 'Bdrx' has been modified to score tracks inside
the detector zones. This subroutine, shown in Table 7.8, calculates the track
length 'delta' needed to leave the current zone. If the current zone, 'nzcur,'
is a detector zone, then the track length times the weight of the particle is
scored in the array 'bpart.'
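The bookkeeping performed by 'Bdrx' and 'Col' can be mimicked in a few lines. The sketch below (ours, not the book's Fortran; the array names bpart and cpart are borrowed from the text for illustration) pushes a unit-weight beam through consecutive 1-mfp-thick zones of a pure absorber, scoring track length in 'bpart' and collision weight in 'cpart'. Since each zone has unit volume per unit beam area and Σ_t = 1, the collision density in a zone equals Σ_t times the track-length flux, so the two tallies should agree within statistics.

```python
import math
import random

# Track-length ('bpart') and collision ('cpart') tallies per zone for a
# unit-weight beam entering a slab of pure absorber divided into
# 1-mfp-thick zones (illustrative sketch; array names follow the text).
random.seed(2)
SIGMA_T = 1.0
NZONES = 5
N = 200_000

bpart = [0.0] * NZONES   # weighted track length per zone
cpart = [0.0] * NZONES   # weighted collisions per zone
for _ in range(N):
    s = -math.log(random.random()) / SIGMA_T      # flight distance
    for z in range(NZONES):
        overlap = max(0.0, min(s, z + 1.0) - z)   # track length in zone z
        bpart[z] += overlap                        # track-length score
    z_col = int(s)                                 # zone of the collision
    if z_col < NZONES:
        cpart[z_col] += 1.0                        # collision score

# Per unit zone volume, collisions = SIGMA_T * track length (eqn 7.14),
# so cpart[z] and SIGMA_T * bpart[z] estimate the same quantity.
```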
Table 7.6. Modified Subroutines for Example 7.2
Subroutine Location
'Source' Table 7.7
'Bdrx' Table 7.8
'Col' Table 7.9
'Stats' Table 7.10
Subroutine 'Col,' shown in Table 7.9, scores collisions that occur in the
detectors, as well as that portion of the track length in the detector that lies
between the previous event and the collision point. The track length in the
zone from the previous event (which could be a collision or a boundary
crossing) is contained in the variable 'delta.' If the collision occurs in a
detector, two scores are made: the track length times the weight of the
particle, and the weight of the particle undergoing a collision. These are
scored in the arrays 'bpart' and 'cpart,' respectively. The modified
7. Monte Carlo Detectors 189
L = t/|μ|     (7.15)

where μ = Ω·n is the cosine of the angle between the particle direction Ω and the
unit normal n to the surface. Because the absolute value of the cosine is used in
eqn 7.15 it does not matter whether n is the inner or outer normal.
[Figure: a particle track of length L crossing a surface region of thickness t.]
Assuming the total cross section in the medium from which the particle
is exiting (the current zone) is Σ_t, the probability of the particle having a
collision in the track length L is

P_c = 1 − e^(−Σ_t L)     (7.17)

Thus we can estimate the flux in the region of thickness t on the surface of
interest by the collision-density flux estimator, where the reaction rate R is
given by

R = w P_c ≈ w Σ_t L     (7.18)

and w is the weight of the particle being scored. Using eqn 7.14 we find the
flux can be estimated by

φ = R/(Σ_t V) ≈ w L/V = w t/(|μ| V)     (7.19)

Because the detector region has volume

V = A t     (7.20)

where A is the area of the scoring surface, the thickness t cancels and the
score for a single crossing is

φ = w/(A |μ|)     (7.21)

Summing over all crossings of the surface within the scoring limits gives

φ = (1/A) Σ_i w_i/|μ_i|     (7.22)

which, normalized per source particle, becomes

φ = (1/(N A)) Σ_i w_i/|μ_i|     (7.23)

This is the estimator used for scoring the flux on a surface. Because the
surface selected for scoring frequently forms a boundary within the problem
geometry, such a detector is sometimes called a boundary-crossing detector.
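The surface-crossing estimator is easy to check in a case where the answer is known. In the sketch below (ours; illustrative only, not PFC code) an isotropic point source sits at the center of a spherical scoring surface in a pure absorber, so every crossing is radial (|μ| = 1) and the estimator should reproduce the uncollided flux e^(−Σ_t r)/(4πr²).

```python
import math
import random

# Surface-crossing flux estimate (eqn 7.23) on a sphere of radius RDET
# centered on an isotropic point source in a pure absorber (illustrative
# sketch).  Flights from the origin are radial, so |mu| = 1 at every
# crossing of the spherical surface.
random.seed(3)
SIGMA_T = 1.0
RDET = 1.0
N = 100_000

area = 4.0 * math.pi * RDET**2
score = 0.0
for _ in range(N):
    s = -math.log(random.random()) / SIGMA_T   # distance to absorption
    if s > RDET:                               # particle crosses the surface
        score += 1.0 / 1.0                     # w/|mu| with w = 1, |mu| = 1

flux_mc = score / (area * N)
flux_exact = math.exp(-SIGMA_T * RDET) / (4.0 * math.pi * RDET**2)
```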
It is useful to consider an alternate derivation of eqn 7.23. Recall that
the scalar flux Φ is the integral of the angular flux; i.e., from eqn 3.8

Φ = ∫₀^2π ∫₀^π ψ(θ,φ) sin θ dθ dφ     (7.24)

where φ is the azimuthal angle and θ is the polar angle as shown in Figure
3.1. Changing variables using μ = cos θ and defining ψ(μ) by

ψ(μ) = ∫₀^2π ψ(μ,φ) dφ     (7.25)

gives
Φ = ∫₋₁^1 ψ(μ) dμ     (7.26)
Defining J⁺(μ) as the number of neutrons per unit area crossing the x-y
plane in the positive z direction as a function of μ, then

J⁺(μ) = μ ψ(μ),  μ > 0     (7.27)

Similarly defining J⁻(μ) for neutrons crossing the x-y plane in the negative z
direction gives

J⁻(μ) = |μ| ψ(μ),  μ < 0     (7.28)

so that

Φ = ∫₀^1 (J⁺(μ)/μ) dμ + ∫₋₁^0 (J⁻(μ)/|μ|) dμ     (7.29)
These integrals may be solved using Monte Carlo by selecting from J⁻(μ)
and J⁺(μ), which is done by observing boundary crossings during the random
walk process, and then scoring w/|μ|. Because we must normalize the result
by the area of the boundary, eqn 7.29 is equivalent to eqn 7.23. Although
we derived eqn 7.29 by assuming a surface parallel to the x-y plane, it is
valid for any surface.
The flux estimator of eqn 7.23 or eqn 7.29 is not bounded. That is, the
score is unbounded when the particle trajectory becomes tangent to the
detector surface. As we saw for the next-event estimator, unbounded scores
lead to erroneous variance estimates. We therefore may legitimately ask
whether we must tolerate unreliable variance estimates for the surface-
crossing estimator.
As was discussed in Section 7.2, accounting for collisions close to a
next-event estimator is essential for obtaining a correct flux estimate from
the detector. Because one cannot exclude contributions from nearby
collisions, special treatment is required in order to produce a valid variance
estimate in such a detector. However, unlike the next-event estimator, the
contribution to the total flux on a surface, of particles traveling tangent (or
nearly tangent) to the surface, is generally small compared with the
contribution of particles incident from the remainder of the 4π solid angle.b
To see this, expand the angular flux in Legendre polynomials

ψ(μ) = Σ_l a_l P_l(μ)     (7.30)

and keep only the isotropic and first order anisotropic terms,

ψ(μ) ≈ a + bμ     (7.31)

The second moment of the w/|μ| score, which governs the variance of the
estimator, is proportional to

∫ ψ(μ)/|μ| dμ     (7.32)

Substituting eqn 7.31 into 7.32, considering only the half space μ > 0, and
neglecting particles having μ < ε, gives

∫_ε^1 (a + bμ)/μ dμ = a ln(1/ε) + b(1 − ε)     (7.33)

This result is clearly finite for any ε > 0. As ε → 0 the log term dominates;
i.e., for small ε the variance is logarithmically divergent,
σ² ≈ a ln(1/ε)     (7.34)
As expected, this result approaches the correct value as ε → 0. That is, the
excluded region of the integral, 0 ≤ μ < ε, would have contributed an amount

∫₀^ε (a + bμ) dμ = aε + bε²/2     (7.36)

which vanishes as ε → 0. Rather than simply discarding crossings with
|μ| < ε, we can score them using the mean value of 1/|μ| over the excluded
interval. Because crossings arrive with a density proportional to |μ|ψ(μ),
this mean value is

⟨1/|μ|⟩ = ∫₀^ε μ ψ(μ) (1/μ) dμ / ∫₀^ε μ ψ(μ) dμ     (7.37)

which, for the nearly isotropic flux of eqn 7.31, is approximately

⟨1/|μ|⟩ ≈ ∫₀^ε dμ / ∫₀^ε μ dμ = 2/ε     (7.38)

The surface-crossing estimator then becomes

φ = (1/(N A)) [ Σ_{|μ_i|≥ε} w_i/|μ_i| + (2/ε) Σ_{|μ_i|<ε} w_i ]     (7.39)
instead of eqn 7.23. With this correction the error in the flux estimate is no
longer proportional to ε, but is reduced to the extent that 2/ε is a good
estimate of the mean value of 1/|μ| for |μ| < ε. The assumption that the flux
over −ε < μ < ε can be modeled with one anisotropic term is generally good
for small ε. The non-stochastic estimate 2/ε for the flux contribution over
this interval usually eliminates systematic bias in a surface-crossing
estimator while still providing reasonable variance estimates. It remains to
choose a value for ε that will keep the variance within reasonable limits
while retaining 2/ε as a good estimate for 1/|μ| when |μ| < ε. The value ε =
0.01 has proved to be reasonable and in practice, for such a small ε, it does
not matter very much whether the 2/ε correction is made.
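The effect of the 2/ε treatment can be seen with a toy calculation (ours; a sketch, not PFC code). For an isotropic one-sided angular flux ψ(μ) = 1 on 0 < μ ≤ 1 the scalar flux is exactly 1, and crossings of a unit area arrive with density proportional to μ, which can be sampled as μ = √ξ. Scoring w/|μ|, with 2/ε substituted when |μ| < ε, the estimator stays unbiased while its variance remains finite.

```python
import math
import random

# Surface-crossing scores with the 2/eps correction for near-tangent
# crossings (illustrative sketch).  For an isotropic one-sided flux
# psi(mu) = 1, crossings have pdf 2*mu on (0,1], sampled as mu = sqrt(xi);
# the current through a unit area is J = 1/2 and the scalar flux is 1.
random.seed(4)
EPS = 0.01
M = 200_000          # number of boundary crossings simulated
J = 0.5              # crossings per unit area (known analytically here)

total = 0.0
for _ in range(M):
    mu = math.sqrt(random.random())                    # pdf 2*mu
    total += (1.0 / mu) if mu >= EPS else 2.0 / EPS    # epsilon-corrected score

flux_est = J * total / M    # should estimate the scalar flux, 1.0
```

Without the correction the second moment of the 1/μ score diverges logarithmically, just as the text describes; with it, the mean score over the excluded interval is exactly right for a constant angular flux.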
both equal to one for all zones in the problem, as was the case in Example
7.2.
In order to score a boundary crossing by means of eqn 7.23 we must
determine the point of intersection of the particle track and the detector
surface, as well as the angle between the flight path and the normal to
detector surface at this point of intersection. Since we are using concentric
spherical surfaces as our detectors, the normal to the detector surface at the
point at which the particle track intersects the surface will always be in the
radial direction. For a particle undergoing a collision at the point p, and
departing the collision in direction Ω, the coordinates P_i = (x,y,z) of the
intersection of the particle track with a spherical surface of radius r = r_d
centered at the origin are given by eqn 3.44, where s is determined by eqn
3.49 with r₀² = r_d². This geometry is shown in Figure 7.4.
[Figure 7.4: intersection of a particle flight path with a spherical detector surface.]
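The intersection distance s follows from the quadratic |p + sΩ|² = r_d². The helper below is our own sketch of that calculation (the text's eqns 3.44 and 3.49 are not reproduced in this chunk); it returns the smallest non-negative root, or None when the flight line misses the sphere.

```python
import math

def distance_to_sphere(p, omega, rd):
    """Smallest non-negative s with |p + s*omega| = rd, or None.

    p is the starting point, omega a unit direction vector, rd the
    radius of a sphere centered at the origin (illustrative sketch of
    the eqn 3.44 / eqn 3.49 calculation).
    """
    b = sum(pi * wi for pi, wi in zip(p, omega))   # p . omega
    c = sum(pi * pi for pi in p) - rd * rd         # |p|^2 - rd^2
    disc = b * b - c
    if disc < 0.0:
        return None                                # flight line misses sphere
    root = math.sqrt(disc)
    for s in (-b - root, -b + root):               # nearer crossing first
        if s >= 0.0:
            return s
    return None                                    # sphere is behind the particle

def intersection_point(p, omega, s):
    """Coordinates of the crossing, p + s*omega."""
    return tuple(pi + s * wi for pi, wi in zip(p, omega))
```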
The results obtained by running PFC with the indicated modifications for
10⁵ source particles are shown in Table 7.16. The flux estimates from
Examples 7.1 and 7.2 are shown for comparison. The surface-crossing flux
estimates shown in Table 7.16 are in excellent agreement with the track-
length fluxes. The apparent discrepancy between the detectors at 9.9 and
10.0 mfp is caused by the fact that the track-length flux is the average flux
within the volume from r = 9.9 to r = 10.0 while the surface-crossing flux is
calculated at these two radii. The next-event estimator has reasonable
results, except for the detectors at 0.1 and 8.0 mfp. For these two cases the
estimated flux is lower than the more-reliable surface-crossing estimate. At
both of these radii the estimated standard deviation is also lower for the
next-event estimator than for the surface-crossing estimator. This is typical
of the next-event estimator in which a poor estimate of the answer is often
associated with a small standard deviation.
p(r) = e^(−∫₀^r Σ_t(E,s) ds)     (7.40)

φ = (w/(A|μ|)) e^(−∫₀^r Σ_t(E,s) ds)     (7.41)
detector surface need not be flat. If the surface is curved it is possible for a
trajectory to pass through it at more than one point. In this case the scores at
all of the crossing points are included in the flux estimate.
[Figure: flight paths from a collision point crossing a curved detector surface at more than one point.]
φ = (2w/(Aε)) e^(−∫₀^r Σ_t(E,s) ds)     (7.42)
Let us return one more time to the problem considered in the previous
examples of this chapter. As in Example 7.3 we will place boundary
crossing detectors at 0.1 mfp and 9.9 mfp as well as at integral multiples of
one mfp, for a total of twelve detectors. This time, however, we will score
the detectors using the expectation surface-crossing estimator.
The geometric description for using PFC to solve this example is shown
in Table 7.17. For this application we have defined only two zones although
we have included twelve spheres in the geometry definition. The latter will
provide the detector surfaces for the expectation surface-crossing estimators.
The list of the modified subroutines to be used in this calculation is given in
Table 7.18.
results, for the same number of start particles. However, the present
calculation required 93 seconds of execution time versus 37 seconds for the
calculation in Example 7.3. Increasing the run time of the surface-crossing
estimator to match that of the current calculation should decrease the
standard deviations obtained by a factor of 1.59. Thus the expectation
surface-crossing estimator is less efficient than the surface-crossing
estimator for all of the detectors in this example.
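Efficiency comparisons like this one are conveniently expressed through a figure of merit, FOM = 1/(R²T), where R is the relative standard deviation and T the run time; because R scales as 1/√T, the FOM is roughly constant for a given estimator. A short sketch (ours) of the arithmetic behind the factor 1.59:

```python
import math

def figure_of_merit(rel_sd, run_time):
    """FOM = 1/(R^2 * T); larger means a more efficient estimator."""
    return 1.0 / (rel_sd**2 * run_time)

# Running the 37-second calculation for 93 seconds instead would shrink
# its standard deviations by sqrt(93/37), the factor quoted in the text.
sd_reduction = math.sqrt(93.0 / 37.0)

# The FOM itself is unchanged by running longer: R -> R*sqrt(37/93)
# while T -> 93.  (The 2% relative error below is an assumed example.)
fom_37 = figure_of_merit(0.02, 37.0)
fom_93 = figure_of_merit(0.02 * math.sqrt(37.0 / 93.0), 93.0)
```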
therefore, the error in the neutron velocity incurred by using the non-
relativistic approximation at a neutron energy of 14 MeV will be (E − E_r)/E_r
≈ 0.0103. Since 14.1 MeV (the energy of the neutron emitted in a fusion
reaction between deuterium and tritium) is a reasonable upper limit for
neutron energies of interest in many neutron transport problems, and since
this error of about 1% is normally considered acceptable for transport
analysis, we conclude that we can treat the kinematics of neutron motion
non-relativistically. Therefore the speed of a neutron having energy E will
be assumed to be

v = √(2E/m)     (7.43)

Using the conversion 1.60 × 10⁻¹² erg/eV, the speed of a neutron in cm/sec
as a function of its energy E in eV is

v ≈ 1.383 × 10⁶ √E     (7.44)

A particle that undergoes an event at time t and travels a distance R to its
next event therefore arrives at time

t′ = t + R/v ≈ t + 0.723 × 10⁻⁶ R/√E     (7.45)
We may neglect the interaction time of the neutron with the scattering
nucleus compared with the time of flight between collision points because
such interactions occur over distances on the order of the nuclear radius,
which is many orders of magnitude smaller than the distance between
nuclei. Thus we can use eqn 7.45, applied cumulatively to each particle free
flight, to assign a time to every collision, boundary crossing, or other event
of interest in the neutron's random walk.
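The cumulative time bookkeeping amounts to two one-line functions. The sketch below (ours) uses the non-relativistic speed and flight-time relations of this section; as a check, the speed at 10⁷ eV works out to about 4.37 × 10⁹ cm/sec, the value used later in this chapter.

```python
import math

def neutron_speed(energy_ev):
    """Non-relativistic neutron speed in cm/sec for E in eV (eqn 7.44)."""
    return 1.383e6 * math.sqrt(energy_ev)

def arrival_time(t, distance_cm, energy_ev):
    """Time of the next event after a free flight of the given length
    (eqn 7.45, applied cumulatively along the random walk)."""
    return t + distance_cm / neutron_speed(energy_ev)
```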
The energy of particles and time-dependence of events in a Monte Carlo
transport problem are normally scored using a discrete mesh. The
boundaries of these discrete meshes can be selected by the user in a manner
best suited to the particular problem under consideration. The width of the
time and energy steps can be varied at will; however, as always, the
statistics will typically be better with wide steps than with narrow steps.
Because it is not necessary to use constant time steps, or even the same time
bins for every detector, we can easily obtain both early- and late-time data at each
detector. Relatively small time intervals have been selected for the time
mesh of the expectation surface-crossing estimators in order to resolve the
particle wave front.
The list of the modified subroutines to be used in this calculation is given
in Table 7.21. A modified version of subroutine 'Source' is shown in Table
7.22. This subroutine uses the standard isotropic point source at the origin
but initializes the 'age' variable in order to track the time of the events in the
particle random walk. The modified subroutine 'Col' is shown in Table
7.23. This routine updates the particle age at each collision site. Particles
that are older than the age cutoff are terminated. The simple version of
Russian roulette used in previous examples is included.
The detectors are scored using the methods of Examples 7.3 and 7.4 with the
addition of the transit time of the particle flight from the source point or
collision site to the detector. The modified subroutine 'Bdrx' is shown in
Table 7.24. This subroutine scores the standard boundary-crossing flux. It
updates the age of the particle to determine the time of each boundary
crossing, and scores the flux estimate in the appropriate time bin. It
terminates the tracks of particles that are older than the age cutoff when they
reach the boundary. Subroutine 'Dist' is shown in Table 7.25. This
subroutine performs the expectation surface-crossing estimation. As is
done for the standard boundary-crossing detector, the code determines the
time of each boundary crossing and scores the flux estimate in the
appropriate time bin. If the extrapolated time exceeds the cutoff the routine
does not produce a score but tracking continues. The modified subroutine
'Stats' is shown in Table 7.26. This subroutine sums the tallies and
performs the variance analysis of the time-dependent detectors.
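A time mesh of the kind described here amounts to a sorted list of bin upper limits and a bisection search per event. A minimal sketch (ours, not the book's subroutines; the 19 logarithmic steps between 10⁻⁸ and 10⁻² sec anticipate the leakage example later in the chapter):

```python
import bisect

# 19 logarithmically spaced upper time limits from 1e-8 to 1e-2 sec;
# times below the first limit fall in bin 0 (illustrative sketch).
NBINS = 19
uppers = [10.0 ** (-8.0 + 6.0 * j / (NBINS - 1)) for j in range(NBINS)]
tally = [0.0] * NBINS
AGE_CUTOFF = uppers[-1]

def score_event(age, weight):
    """Add 'weight' to the time bin containing 'age'.

    Returns False when the particle has exceeded the age cutoff and
    should be terminated without a score (as in subroutine 'Bdrx').
    """
    if age > AGE_CUTOFF:
        return False
    j = bisect.bisect_left(uppers, age)        # first upper limit >= age
    tally[min(j, NBINS - 1)] += weight
    return True
```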
D ∂²φ/∂r² + (2D/r) ∂φ/∂r − Σ_a φ = (1/v) ∂φ/∂t     (7.46)
(7.48)
(7.49)
D = 1/(3Σ_t[1 − (4/5)(1 − c)]) = 1/(3[1 − 0.8(1 − c)])     (7.50)

with Σ_t = 1.
At early times the diffusion and the Monte Carlo results are in fair agreement. At late times
the diffusion result overestimates the flux.
Figure 7.6. Comparison Between Monte Carlo and Diffusion Results, r = 0.5 cm
Figure 7.7. Comparison Between Late-Time Monte Carlo and Diffusion Results, r = 0.5 cm
Let us consider once again the problem of Example 4.3 involving the
slowing down of neutrons in hydrogenous media. We assume a point source
of fission-spectrum neutrons released as a delta-function pulse at time zero
at the center of a sphere of water. The scattering is assumed isotropic in the
center of mass system and the particle velocity is obtained from eqn 7.44.
The radius of the sphere is set by the user. We wish to score the time
dependence of the neutrons leaking from the sphere.
The list of the modified subroutines to be used in this calculation is given
in Table 7.31. The cross sections for this example are the same as those used in
Example 4.3. Subroutine 'Hydrogen,' shown in Table 7.32, has been
changed from that given in Table 4.10 to reflect the fact that the variables
passed are now in double precision, and to add a calculation of the non-
absorption probability to account for absorption in hydrogen. Subroutine
'Oxygen' is shown in Table 7.33. The only difference between this
subroutine and that shown in Table 4.10 is that necessary to account for the
variables passed being double precision. The necessary input information
for calculating the combined effective cross-section is read by subroutine
'Xsects,' which is shown in Table 7.34. This subroutine reads the atomic
number densities, in units of 10²⁴ atoms/cm³, of hydrogen and oxygen for
water of nominal density, 0.06692 and 0.03346 respectively. These values
are included in the file 'xsects.txt.'
The microscopic cross section data must be "mixed" to define a
macroscopic cross section set that will represent the effective cross sections
for the mixture of elements or isotopes at the correct density. The particle
flight path is determined using the sum of these cross sections as though the
material through which the neutrons are moving were a single isotope. The
cross sections are mixed in subroutine 'Mxsec,' shown in Table 7.35. This
subroutine calls subroutines 'Hydrogen' and 'Oxygen' in order to calculate
the total energy-dependent cross section for water.
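The mixing itself is a number-density-weighted sum, Σ_t(E) = N_H σ_H(E) + N_O σ_O(E), with N in units of 10²⁴ atoms/cm³ and σ in barns (10⁻²⁴ cm²), so that Σ comes out directly in cm⁻¹. A sketch (ours; the two microscopic cross-section values below are illustrative stand-ins, not the energy-dependent library data used by 'Hydrogen' and 'Oxygen'):

```python
# Macroscopic cross-section mixing for water (illustrative sketch).
# Number densities from the text, in units of 1e24 atoms/cm^3:
N_H = 0.06692
N_O = 0.03346

def macro_xsec(sigma_h_barns, sigma_o_barns):
    """Sigma_t in 1/cm: (1e24 /cm^3) * (1e-24 cm^2) = 1/cm."""
    return N_H * sigma_h_barns + N_O * sigma_o_barns

# Assumed example values in barns, for illustration only:
sigma_t_water = macro_xsec(20.0, 3.8)
mean_free_path = 1.0 / sigma_t_water          # cm
```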
The geometry input for this example is shown in Table 7.41. A system
radius of 30 cm is used. The program scores the leakage in 19 time steps
spaced logarithmically between 10⁻⁸ and 10⁻² seconds. The calculation used
a constant Russian roulette weight of 0.001 and a survival probability of 0.1
(which increases the weight of particles surviving Russian roulette by a
factor of ten).
The results of running the code with 10⁵ start particles are shown in
Table 7.42. A plot of the time-dependent leakage, integrated over energy, is
shown in Figure 7.8. Because the source is not monoenergetic in this
example, there is no arrival wave of neutrons. However, very few source
particles were produced with energies above 10⁷ eV. The speed of a
neutron at 10⁷ eV is about 4.37 × 10⁹ cm/sec. Therefore the earliest time for
[Figure 7.8: neutron leakage per unit time versus time in seconds.]
The points on the curve in Figure 7.8 indicate the total neutron leakage
per unit time for each time step. They are plotted at the upper limit of the
time steps. The first data point indicates that about 4 × 10⁵ n/s leaked
between time zero and 10⁻⁸ sec. This corresponds to roughly 0.004
probability of a fission spectrum neutron being emitted with an energy
greater than approximately 4.7 MeV, and the neutron subsequently leaking
from the sphere of water without suffering a large-angle scattering event.
The leakage rate increases following the arrival of these leading neutrons
because the fission spectrum produces more neutrons below 4.7 MeV than
above this energy.
Exercises
1. Repeat Example 7.1, but place small spherical regions around each of the
next-event detector points and tally the number of collisions within each
such region; i.e., score the number of collisions occurring in the vicinity of
each next-event detector.
a. Using eqn 7.14, is the flux result estimated with the detector consistent
with the number of collisions calculated? Show how the results vary
with the size of the spherical regions.
b. Make the boundaries of your spherical regions into both standard and
expectation surface-crossing detectors. How do these two flux estimates
compare with the next-event estimator results? Has your calculation
provided a thorough sampling of the important regions of phase space for
each of the detectors?
2. Consider a spherical shell of material (see Figure 7.9.) Assume the radial
thickness of the material is 10% of its inner radius. Assume further that the
spherical shell is contained in a box whose inner surfaces are tangent to the
outside of the shell and whose walls are of the same thickness as the shell.
Consider a monoenergetic problem with isotropic scatter in the laboratory
system for which there is an isotropic point source of neutrons at the center
of the shell. Assume a vacuum exists inside the shell and between the shell
and the box. Determine the flux as a function of position on the exterior
faces of the box under the following conditions:
a. The shell and box are composed of a pure absorber with a mean free
path equal to the thickness of the shell.
b. The shell and box are composed of a pure scatterer with a mean free
path equal to the thickness of the shell.
c. The shell is composed of a scattering and absorbing material with a
mean free path equal to the thickness of the shell and a non-absorption
probability of 0.5, and the box is composed of a scattering and absorbing
material with a mean free path equal to half the thickness of the shell and
a non-absorption probability of 0.9.
[Figure 7.9: spherical shell of thickness 0.1r enclosed in a box.]
1 F. H. Clark, "Variance of Certain Flux Estimators Used in Monte Carlo Calculations," Nucl.
Sci. Eng. 27, 1967, pp. 235-236; L. L. Carter and E. D. Cashwell, Particle Transport
Simulation with the Monte Carlo Method, U.S. Energy Research and Development
Administration, Washington, D.C., 1975, pp. 54-55.
2 S. K. Fraley and T. J. Hoffman, "Bounded Flux-at-a-Point Estimators for Multigroup Monte
Carlo Computer Codes," Nucl. Sci. Eng. 70, 1979, pp. 14-19.
3 S. Glasstone and M. C. Edlund, The Elements of Nuclear Reactor Theory, D. Van Nostrand,
Princeton, NJ, 1952, pp. 47-48.
4 J. C. Saccenti and W. A. Woolson, "A Comparison of Air-Over-Ground Transport
Calculations Using Different Cross Sections," Nuclear Cross Sections and Technology,
NBS SP 425, National Bureau of Standards, Washington, DC, 1975, p. 431.
Chapter 8
Nuclear Criticality Calculations with Monte Carlo
Here Σ* is the total cross section for neutron transfer from Ω′,E′ to Ω,E,
including fission events. Eqn 8.1 has solutions of the form
If the spatial dependence of the particle angular density ψ at a given time
can be expanded in terms of the eigenfunctions N_j, then at late times it is
α₀ ≈ (1/τ) ln[ ∫N(r,Ω,E,t+τ) dr dΩ dE / ∫N(r,Ω,E,t) dr dΩ dE ]     (8.5)
v Ω·∇N_k + v Σ_t N_k = ∫∫ v(E′) Σ_x(r; Ω′,E′ → Ω,E) N_k(r,Ω′,E′) dΩ′ dE′
    + (1/k)(χ(E)/4π) ∫∫ ν(E′) v(E′) Σ_f(r,E′) N_k(r,Ω′,E′) dΩ′ dE′     (8.6)
where the subscript f refers to fission and the subscript x refers to all other
interactions that result in neutrons appearing in the phase space. The factor
of 4π in the second integral is used to normalize the spectrum of fission
neutrons.2 The integrals are to be taken over all values of the variables that
can contribute to the eigenfunction density N_k(r,Ω,E). In this chapter we
will examine ways to determine the eigenvalue 1/k, and hence the effective
multiplication factor k, using Monte Carlo methods. This effective
k = N_j / N_{j−1}     (8.7)

where N_j is the number of fission neutrons produced in generation j.
Alternatively, summing over the n source neutrons tracked,

k = (Σ_{i=1}^n M_i) / (Σ_{i=1}^n N_i)     (8.8)

where M_i is the number of next-generation neutrons produced by source
neutron i, so that the denominator is the total number of source neutrons.
The estimate of eqn 8.8, if made simultaneously with the estimate based on
the ratio of generations method, produces the same estimate of k as eqn 8.7,
within rounding errors, but provides a separate estimate of the variance of
the result. The estimate of the variance based on the total subsequent
produced by neutrons in the previous generation and track them until they
are absorbed (in which case they may produce fission neutrons), or until
they escape from the system. We will include both a generation tally and a
total subsequent population tally. For the former, after all of the source
neutrons in a generation have been tracked we will tally the total number of
next-generation fission neutrons produced in the system by those source
neutrons. We will then track the next generation of neutrons based on these
fissions. The number of neutrons resulting from fissions in the current
generation divided by the number resulting from fissions in the previous
generation will provide an estimate of k. By eqn 8.7, just as with any Monte
Carlo estimator, our answer will be the mean of such estimates of k obtained
from as many generations as we wish to track. For the total subsequent
population estimate we will determine the total number of progeny for each
source neutron tracked. By eqn 8.8 the total number of progeny divided by
the total number of source particles will provide a second estimate of k
(which should be the same as that of eqn 8.7 within rounding errors), and
the associated variance (which will not be the same as that of the first
estimate.)
The technique of including an estimate of the desired answer in a tally
only after tracking many neutrons instead of after each neutron track is
known as the batch method of scoring in Monte Carlo. This method was
discussed briefly in Example 6.4, and any of the previous examples in this
book could have used the batch method. When using batches, an estimate of
the desired result is obtained from the average of the scores of some fixed
number of source neutrons rather than from each neutron separately. For
statistical purposes this average, called the batch average, is treated as a
single estimate of the result, and both this estimate and its square are tallied
in the usual way. The final result of the calculation, and the variance of this
result, are then obtained from eqn 2.39 and either eqn 2.40 or 2.41.
In the present case, a batch is equal to a generation. Obviously there is a
tradeoff between the number of neutrons per batch and the number of
batches. A balance is usually sought to ensure that each batch will provide a
reasonable sampling of the portions of phase space that contribute
significantly to the desired result, and that enough independent estimates of
the random variable will be obtained to provide reasonable statistics in the
final result.
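The batch bookkeeping reduces to treating each batch average as one sample. A sketch (ours; the standard sample-mean and standard-error formulas stand in for the text's eqns 2.39-2.41, which are not reproduced in this chunk):

```python
import math

def batch_statistics(batch_means):
    """Mean and standard error of the mean from per-batch averages.

    Each batch average is treated as a single independent estimate of
    the result (illustrative sketch of the batch method of scoring).
    """
    n = len(batch_means)
    mean = sum(batch_means) / n
    var = sum((x - mean) ** 2 for x in batch_means) / (n - 1)
    return mean, math.sqrt(var / n)

# Example: four batch estimates of k (made-up numbers):
k_mean, k_sem = batch_statistics([1.00, 1.02, 0.98, 1.00])
```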
Except when the system is exactly critical, the average number of
neutrons in successive generations in a multiplying system will tend to be
either increasing or decreasing. Thus a problem encountered in using the
generation method to estimate the effective neutron multiplication is that of
ensuring that the population neither dies out nor becomes so large that the
calculation time is excessive. Usually the best option is to maintain a
constant population of source neutrons in successive generations. To do this
Both the multiplication factor and the generation time will depend on the
fraction of incident neutrons absorbed per collision. However, for a critical
system, in which k = 1, the exponent will be zero and the result will not
depend on the number of incident particles that survive the collision. Thus
our two cases of c and c−1 neutrons leaving a collision in the next generation
should give identical results for ψ when k = 1, but different results for k ≠ 1.
Before valid results can be obtained from the tally to be used in this
example we must ensure that a fundamental mode flux distribution has been
established. Because we base the spatial distribution of the source in a
given generation on the fission distribution that resulted from the neutrons
in the previous generation, we anticipate that we will eventually obtain a
normal-mode flux from any initial source distribution. This is certainly true;
however, the rate at which the flux converges to the fundamental mode will
depend on the initial distribution and the specifics of the problem. Our
calculation will converge faster if we choose an initial source distribution
that is similar to the fundamental mode than if we choose one quite different
from the fundamental mode. In addition, the flux in a simple, fast neutron
system, such as a fast critical assembly, will converge faster than that in a
complex thermal reactor system.
To examine the convergence of the initial flux guess to the fundamental
mode we will solve this example three times, using three different fission
distributions for the first generation source. The first calculation will use
the simplest possible distribution: an isotropic point source at the center of
the sphere. The second calculation will start with a uniform distribution of
fissions throughout the spherical volume of fissile material. The third
calculation will use the spatial flux distribution for the diffusion theory
buckling of a critical sphere. In this last case the radial dependence of the
fission distribution will be proportional to sin(πr/R_c)/r, where R_c is the outer
radius of the system. When k = 1, R_c is the critical radius. We anticipate
that an equilibrium spatial distribution of fission events will be established
in a few generations for all three cases.
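The three starting distributions can be sampled with a few lines each. The sketch below (ours, not the book's 'Source' routine) draws source radii for the uniform-in-volume case via the inverse CDF r = R ξ^(1/3), and for the sin(πr/R)/r fundamental-mode guess by rejection against the radial density r sin(πr/R); for the point source every radius is simply zero.

```python
import math
import random

# Sampling starting-source radii for the first-generation fission
# distributions (illustrative sketch).
random.seed(5)
R = 1.0   # sphere radius

def sample_uniform_volume():
    """Uniform in volume: p(r) = 3 r^2 / R^3, inverted analytically."""
    return R * random.random() ** (1.0 / 3.0)

def sample_fundamental_mode():
    """p(r) proportional to r * sin(pi r / R): rejection sampling.
    The bound 0.6 exceeds the maximum of (r/R) sin(pi r / R)."""
    while True:
        r = R * random.random()
        if 0.6 * random.random() <= (r / R) * math.sin(math.pi * r / R):
            return r

n = 100_000
mean_uniform = sum(sample_uniform_volume() for _ in range(n)) / n
mean_mode = sum(sample_fundamental_mode() for _ in range(n)) / n
```

For the uniform-in-volume case the mean radius is 3R/4; for the sin(πr/R)/r mode it is R(1 − 4/π²), so the two samplers are easy to verify against these analytic values.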
The PFC geometry input file, 'geom.txt,' used for this problem is shown
in Table 8.1. The total cross section is set to 3.0 to give a sphere of fissile
material with a radius of three mean free paths for the criticality calculation.
The value for c will be varied to determine its effect on k-effective.
Table 8.1. Geometry Description for Example 8.1
0                              There are no RPPs
1                              There is 1 sphere
0.0, 0.0, 0.0, 1.00            center coordinates and radius - sph1
2                              There are 2 zones
1, 1                           Zone 1 has one body, 1
1, -1                          Zone 2 has one body, -1
[Figure: calculated k versus batch number.]
particles are terminated only by leakage from the system. In this case c − 1
particles are emitted from each collision as next-generation particles. The
modified Subroutine 'Stats' is shown in Table 8.11. Because each particle
can have more than one collision, and make more than one contribution to
the score, the method for accumulating statistics has been modified
accordingly.
calculated values for k are not the same as in the prior calculation. This
occurs because the generation time will change depending on the number of
neutrons leaving a fission event that are assumed to be in the current
generation (see eqn 8.9).
estimate of the variance would still have been more accurate when based on
the total subsequent population method than when based on the generation
method.
S_{n+1,i} = Σ_j a_ij S_{n,j}     (8.10)

k = Σ_i S_{n+1,i} / Σ_i S_{n,i}     (8.11)

where

a_ij = the number of next-generation fission neutrons produced in region i
       per fission neutron born in region j     (8.12)

The matrix a_ij, which is called the fission matrix, can be calculated using a
single generation of neutrons. Once the matrix is known an initial guess for
S_{0,j} can be used in eqn 8.10, and that equation applied repeatedly until a
stable value of k is obtained from eqn 8.11.
Here we have introduced the term "region" to designate a portion of the
geometry selected for defining this matrix. All fissile material in the
problem geometry must be in some region, and the regions must not overlap.
Obviously a_ij ≠ 0 only when both regions i and j contain fissile materials.
While non-fissile materials may be included in a region, volumes with only
non-fissile materials need not be in a region; i.e., while the regions must
encompass all the fissile material in the geometry, they need not encompass
those portions of the geometry that contain only non-fissile materials.
Although one may define the matrix regions such that they are related to, or
coincident with, the zones used to define the problem geometry, this is not
required. We will assume that the regions used to define the fission matrix
are independent of the zones used to define the problem geometry.
Calculation of the fission matrix requires starting a number of neutrons
in each region and determining the response produced by these neutrons in
all regions, including the source region. If the system fundamental-mode
spatial flux distribution is known, then the source used in the calculation for
each region can be distributed accordingly. In this case the size and number
of the regions should be relatively unimportant and a small number of
regions may provide an acceptable result. However, if the fundamental-
mode flux is unknown, the source distribution used to calculate the elements
of the matrix must be chosen in some approximate manner. A common
choice is to use a uniform distribution over the source region. In this case,
because the source distribution will not be equal to the normal mode
distribution, many regions may be required in order to obtain an accurate
estimate of k. Fortunately the error produced in the matrix elements by an
incorrect initial source distribution tends to be of second order because the
value of the matrix element depends only on the distribution across the
region and not on the distribution over the entire system of fissile material.
In the limit of a large number of regions the sensitivity of the results to the
source distributions used to obtain the fission matrix is small.
a One could specify a convergence criterion, such as some maximum fractional change in k
per iteration, and perform the iteration until that criterion is met.
obtained k ≈ 0.986, it appears that these five regions are not sufficient to
obtain a good result using a uniform source to calculate the matrix elements
for this system.
Figure 8.2 shows the results for k obtained from running the current
program assuming c = 1.2 and using various numbers of regions. These
results used 10⁶ start particles per region to determine the matrix elements,
and 20 iterations of eqn 8.10 to obtain the estimate of the system
multiplication. The figure shows that the result changes little provided
fifteen or more regions are used, and hence we will use fifteen regions to
determine the effective multiplication of our system as a function of c.
[Figure 8.2. Calculated multiplication k as a function of the number of regions used in the fission matrix calculation.]
Subroutine 'Input,' shown in Table 8.24, allows for input of both batch
data and the matrix calculation data. The number of regions for this
example has been set to 15 by this subroutine. Subroutine 'Source' is
shown in Table 8.25. This subroutine uses what would be 20 equal-radial-
width regions and collapses them into 15 regions. The five smallest of the
20 regions are put into the first of the 15 regions. The next two are placed in
the second region of the 15, and the remaining regions are unchanged. This
modification was made because the volumes of the inner regions in the set
of 20 are so small that the number of fissions occurring under a
fundamental-mode neutron distribution would be insufficient to obtain good
statistical results without greatly increasing the number of particles tracked
per generation. Even so, the bank size for the common 'bank' has been
increased from 20,000 to 200,000. This will allow us to increase the
number of particles tracked per generation versus the number of batches
used.
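The collapsing scheme just described can be expressed as a small index map. This is a hypothetical helper, not a routine from PFC, and zero-based indices are assumed.

```python
def collapse_region(i20):
    """Map one of 20 equal-radial-width region indices (0-19) onto the
    15 collapsed regions (0-14): the five innermost fine regions become
    region 0, the next two become region 1, and the remaining thirteen
    are unchanged in order."""
    if i20 < 5:
        return 0
    if i20 < 7:
        return 1
    return i20 - 5          # fine regions 7-19 map to coarse regions 2-14

# sanity check: the 20 fine regions land in exactly 15 coarse regions
assert sorted({collapse_region(i) for i in range(20)}) == list(range(15))
```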
Subroutine 'Col,' shown in Table 8.26, provides for scoring the matrix
values at each collision. This version of 'Col' assumes that all neutrons
experiencing collisions are absorbed and that all secondary neutrons are in
the next generation. Two matrices are used for scoring the matrix method in
this example. The first matrix, 'aij,' is used to tally scores for all neutrons,
as was done in Example 8.2. The second matrix, 'bij,' is used to tally scores
for neutrons on a batch basis. This will permit a matrix calculation of k for
each batch. Subroutine 'Col' also tallies scores for the generation and the
total subsequent population methods.
Subroutine 'Stats,' shown in Table 8.27, calculates the results for the
four different methods of determining k. Subroutine 'keff,' shown in Table
8.28, is similar to that used in Example 8.2. The primary difference is that
data are passed in the subroutine call rather than in a common block. This
change is made so that the subroutine can be used to process both the 'aij'
and the 'bij' matrix data received from 'Stats.' The common block 'in' is
also modified to account for both batch and matrix input data.
Table 8.29. Results for Example 8.3, c = 1.217

                                 500 Batches of 18,000 Neutrons    50 Batches of 180,000 Neutrons
Ratio of Generations             0.999329 ± 0.0001626              0.998928 ± 0.0001649
Total Subsequent Population      0.999329 ± 0.0001555              0.998928 ± 0.0001556
Matrix by Generation             0.999346 ± 0.0001674              0.998867 ± 0.0001725
Standard Matrix                  0.999263 ± 0.0004010*             0.998859 ± 0.0004014*
* Results for the standard deviation are approximate and are known to be high.
It has been shown in the literature that the matrix method can be used to
accelerate the convergence of an initial flux distribution to the fundamental
mode.6 As noted, the iteration that produces a criticality estimate using the
aij matrix also provides an estimate for the source distribution by region.
One can use this estimate of the source distribution, obtained from the first
generation of a matrix-by-generation calculation, as the source for the
second generation. If this process is repeated for several generations,
convergence to the fundamental mode can be achieved faster than using the
generation method alone.
transport corrected and are designated by σtr.b Finally, the cross section for
scattering from group i to group j is given by σi→j. For the uranium
isotopes used in this calculation, within-group scattering is always possible
(σi→i ≠ 0) but upscatter is not (σi→j = 0 for j < i). All scattering is assumed
to be isotropic in the laboratory coordinate system. Again, the components
b The transport-corrected cross section is equal to the total cross section less the product of
the scatter cross section and the average cosine of the scattering angle for the material and
neutron energy of interest. See G. I. Bell and S. Glasstone, Nuclear Reactor Theory, Van
Nostrand Reinhold, New York, 1970, p 104.
must be "mixed" to obtain effective cross sections for the mixture. The
number density of uranium atoms at 18.80 g/cm³ and 93.86% 235U is
approximately 0.04815 × 10²⁴ atoms/cm³.
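The quoted number density follows from N = ρ N_A Σ wᵢ/Aᵢ over the weight fractions of the two isotopes. A quick check, using nominal atomic masses that are assumptions rather than values taken from the book:

```python
AVOGADRO = 0.60221   # Avogadro's number / 10^24, so N comes out in 10^24/cm^3
rho = 18.80          # uranium density, g/cm^3

# weight fractions and (assumed) atomic masses of the two isotopes
n_density = rho * AVOGADRO * (0.9386 / 235.04 + 0.0614 / 238.05)
# n_density is approximately 0.0481 x 10^24 atoms/cm^3
```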
The program in Table 8.33 was written to process the microscopic cross
sections into a macroscopic set to be used by PFC. This cross-section-
mixing program is not an essential part of the example calculation and the
mixing could have been done by a number of methods, including hand
calculation. It is included here to demonstrate how the cross sections were
prepared for input to the modified version of PFC discussed below.
Specifically the absorption, fission, and scatter cross sections have been cast
into the form of cumulative probability distributions from which PFC will
select an outcome from a neutron collision. The energy down scatter matrix
has likewise been converted into a cumulative probability distribution from
which a post-collision energy can be obtained.
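The conversion described above might look like the following sketch. It is a hypothetical Python helper; the names pfis, pabs, pscat, and pggscat follow the text, but the routine itself is not the book's mixing program.

```python
def collision_cdfs(sig_f, sig_c, sig_s, scat):
    """Cumulative probabilities for selecting the outcome of a collision.

    sig_f, sig_c, sig_s: fission, capture (absorption without fission),
    and scatter cross sections for one energy group; scat[j] is the
    relative probability of scattering into group j (no upscatter).
    """
    sig_t = sig_f + sig_c + sig_s
    pfis = sig_f / sig_t             # xi < pfis          -> fission
    pabs = pfis + sig_c / sig_t      # pfis <= xi < pabs  -> capture
    pscat = pabs + sig_s / sig_t     # otherwise scatter; pscat = 1
    # cumulative distribution for the post-scatter energy group
    total = sum(scat)
    pggscat, run = [], 0.0
    for s in scat:
        run += s / total
        pggscat.append(run)
    return pfis, pabs, pscat, pggscat
```

A collision outcome would then be sampled with one random number compared against pfis and pabs, and the post-scatter group found from the first entry of pggscat that exceeds a second random number.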
The code in Table 8.33 reads the cross section data from a file called
'cross.txt'. The contents of the file consist of the group velocities and
fission spectrum given in Table 8.30, followed by the specific cross section
data for 235U and 238U. These latter data are read by group in the order of
group number as listed in Tables 8.31 and 8.32. That is, in the
'loop_over_elements' and 'loop_over_groups' the code sequentially reads
the rows of these tables. Thus the data in these tables must be prepared in
the proper order and placed in the file 'cross.txt'. After mixing and
summing the data for 235U and 238U the code calculates the cumulative
probability distributions for the fission spectrum, 'gpnu.' It also
calculates the probability of fission, 'pfis,' of absorption without fission,
'pabs,' and of scatter, 'pscat.' The normalized downscatter matrix Σi→j is
then converted into a cumulative probability distribution and stored in the
array 'pggscat.' The total macroscopic cross section for each energy group,
in units of cm⁻¹, is stored in the array 'sigt' for use in determining neutron
flight paths. The processed data are stored in a file called 'mixed.txt' which
will be read by PFC.
The geometry input file 'geom.txt' for Example 8.4 is given in Table
8.34. The radius of sphere 1 will be varied to determine the critical radius
of the system. The list of the modified subroutines to be used to perform the
Godiva criticality calculation is given in Table 8.35. Both the generation
and the total subsequent population methods will be used to determine the
system multiplication. The changes in the subroutines arise primarily from
the use of a multi-group cross section set and the need to account for
processes other than fission that can occur at collisions.
PFC, with the modified subroutines listed in Table 8.35, has been
executed repeatedly using the generation and total subsequent population
methods in order to estimate the effective neutron multiplication of bare,
enriched-uranium, Godiva-like spheres with radii from 8.0 to 9.5 cm. The
results of these criticality calculations are shown in Table 8.40 and Figure
8.3. All of the calculations used uniformly distributed first-generation
sources and 60 batches of 10,000 particles each ('nbatch' was set to 50 and
'ntransient' set to ten). The results of the two calculations of k are in
excellent agreement for every system radius.
If we perform a linear regression on the results for r versus k in Table
8.40 we obtain as the best fit for the radius r(k) = 10.72 k - 1.96. This fit
predicts that the system will be prompt critical at a radius of 8.758 ± 0.025
cm, or a critical mass of 52.90 ± 0.45 kg. This calculated radius is 0.6 %
greater than the published radius while the calculated critical mass is 1.8 %
above the published value. Alternatively, if we assume k varies linearly
with radius over the range 8.5 to 8.75 cm, we obtain a critical radius of 8.74
± 0.02 cm. This corresponds to a critical mass of 52.58 ± 0.44 kg. These
latter values are closer to the published values than the linear regression
results that consider radii from 8.0 to 9.5 cm. Since the neutron
multiplication is not linear in radius, an improvement in the result can be
obtained by examining the region around 8.74 cm in greater detail than has
been done here.
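The quoted critical radius and mass can be checked directly from the fit; small differences arise from rounding of the fit coefficients.

```python
import math

# linear fit from the regression above: r(k) = 10.72 k - 1.96
r_crit = 10.72 * 1.0 - 1.96          # radius at prompt critical (k = 1), cm
rho = 18.80                          # uranium density used above, g/cm^3
volume = 4.0 / 3.0 * math.pi * r_crit**3
mass_kg = rho * volume / 1000.0      # roughly 52.9 kg
```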
Reference 8 provides two benchmarks for the Godiva assembly. The
calculations given here are for the idealized, pure metal benchmark. The
second benchmark reflects the actual Godiva composition. This second case
has a density of 18.74 g/cm³, assumes 93.71 weight percent 235U, and has a
critical radius of 8.74 cm. Performing the example calculation on this
second configuration provides results that are also about 0.2% greater in
radius than the benchmark value. One of the reasons for the error in both
cases is that the benchmarks include about 1.02 weight percent 234U, which
is treated as 238U in these example calculations. The reactivity worth of
234U is slightly greater than that of 238U and the present results could be
improved by including 234U in the calculation.
[Figure 8.3. Calculated multiplication k as a function of sphere radius for the Godiva-like system.]
Exercises
2. Find the critical radius and critical mass of fissile material for a
homogeneous sphere of 235U and water. Assume isotropic scatter in the
center of mass to account for neutron downscatter in the water and use the
Watt fission spectrum for fission neutrons. Assume a uranium density of
1.5 g/cm³. Use the Hansen-Roach 16-group cross sections for 235U. Use the
subroutines 'Hydrogen' and 'Oxygen' from Table 4.10 for water. [Hint:
Assume mid-energy of each group for all uranium interactions, continuous
energy in water.]
1 See G. I. Bell and S. Glasstone, Nuclear Reactor Theory, Van Nostrand Reinhold Co., New
York, 1970, pp. 37 ff. See also P. K. MacKeown, Stochastic Simulation in Physics,
Springer-Verlag, New York, 1997, pp. 320 ff.
2 Bell and Glasstone, op. cit., p 9.
3 S. Glasstone and M. C. Edlund, The Elements of Nuclear Reactor Theory, D. Van Nostrand,
Princeton, NJ, 1952, pp. 292-93.
4 For an alternative formulation see L. L. Carter and N. J. McCormick, "Source Convergence
in Monte Carlo Calculations," Nuc Sci Eng 36, 1969, pp. 438-41. See also M. H. Kalos, F.
R. Nakache, and J. Celnik, "Monte Carlo Methods in Reactor Computations," Chapter 5 in
Computing Methods in Reactor Physics, H. Greenspan, C. N. Kelber, and D. Okrent, eds.,
Gordon and Breach, New York, 1968, pp. 420-21.
5 Carter and McCormick, op. cit.
6 Ibid.
7 William H. Roach, "Computational Survey of Idealized Fast Breeder Reactors," Nuc Sci
Eng 8, 1960, pp. 621-51. See also Gordon E. Hansen and William H. Roach, "Six and
Sixteen Group Cross Sections for Fast and Intermediate Critical Assemblies," LAMS-2543,
Los Alamos Scientific Laboratory, Los Alamos, NM, 1961.
8 G. E. Hansen and H. C. Paxton, "Reevaluated Critical Specifications of Some Los Alamos
Fast-Neutron Systems," LA-4208, Los Alamos Scientific Laboratory, Los Alamos, NM,
1969.
Chapter 9
Advanced Applications of Monte Carlo
(9.1)
(9.2)
Here f1 and f2 are probability density functions that reflect the baseline and
perturbed cases, respectively. We evaluate I1 and I2 in the usual manner,
obtaining estimates θ1 and θ2, respectively. Then we calculate
Δθ = θ1 − θ2 (9.3)
where
(9.4)
(9.5)
(9.6)
then the variance in the difference between the two estimated quantities is
var(θ1 − θ2) = var(θ1) + var(θ2) − 2 cov(θ1, θ2) (9.7)
(9.8)
If θ1 and θ2 in eqn 9.3 are positively correlated, cov(θ1, θ2) > 0 and the variance in
the estimate for Δθ can be much less than that given by eqn 9.9.
Positive correlation between the results obtained in two similar Monte
Carlo calculations can be obtained by correlated sampling; i.e., by ensuring
that every particle random walk that does not involve an interaction in the
perturbed portion of the problem is the same in both of the calculations.
Thus as the effect of the perturbation goes to zero the practitioner is assured
that the two calculations will converge to the same result, independent of the
statistical uncertainty in the individual answers, provided the same number
of particles is tracked in both calculations. That is, although the absolute
uncertainty in the result will remain as determined in the individual
calculations, the relative uncertainty between the two calculations will go to
zero as the calculations become identical. The only difference between the
two calculations will be the changes produced by particle interactions, or
other events, in or involving the perturbed region of the problem. The
statistical uncertainties in the two answers will not apply to the uncertainty
in the difference between the two results.
Under the conditions postulated, it is no longer essential that the
individual uncertainties in the two answers be small, but only that the
uncertainty in the difference between the two results be small. Instead of
concealing the perturbation, the uncertainties in the separate answers
become almost irrelevant. However, in analogy with the phase-space
sampling requirements discussed repeatedly in the previous chapters, a valid
perturbation result still requires a thorough sampling of the phase space of
the perturbed problem parameters. Obviously if particle tracks do not
thoroughly sample the perturbed portion of the problem phase space, the
answer obtained may be wrong.
The key to correlated sampling in Monte Carlo transport is to make sure
that corresponding particle tracks in the baseline and the perturbed
calculations use the same random number string. In this way any particle
that does not encounter the perturbed region of the problem will score the
same in both calculations. By ensuring that both calculations are identical
except for tracks with interactions in the perturbed region, and using the
same number of start particles in each, the results will differ only insofar as
the perturbation influences the answer.
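A minimal sketch shows the effect. The toy below repeats the pure-absorber slab case (c = 0) treated later in this chapter, running the baseline and perturbed slabs from the same seed so that both runs consume an identical random number string. It is illustrative Python, not the book's PFC code.

```python
import random

def transmit(z, rng):
    """Analog transport through a purely absorbing slab (c = 0): a
    particle penetrates only if its first flight path exceeds z."""
    return 1.0 if rng.expovariate(1.0) > z else 0.0

def estimate(z, seed, n=100_000):
    """Transmission probability from n histories with a fixed seed."""
    rng = random.Random(seed)
    return sum(transmit(z, rng) for _ in range(n)) / n

p1 = estimate(1.0, seed=1)     # baseline slab, z = 1
p2 = estimate(1.01, seed=1)    # perturbed slab, same random numbers
slope = (p2 - p1) / 0.01       # correlated estimate of dP/dz near z = 1
```

With the shared seed, only particles whose flight path falls between z = 1 and z′ = 1.01 score differently in the two runs, so the slope estimate has far less variance than the difference of two independent runs would.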
There are several techniques that might be used to ensure identical
random number sequences for particles tracked in correlated calculations.
For example, one could use preset, sequential portions of a random number
string for each start particle, or one could use randomly selected segments of
such a string. In the first method the portion of the random number string
used by sequential start particles would be selected by choosing a fixed
interval in the string between such start particles. That is, the initial random
numbers used for successive particles would be chosen in some
deterministic way, such as at fixed intervals. If the user can be assured that
the maximum length of the random number string used for any particle in
either the baseline or the perturbed calculation is less than some integer N,
then each particle can start with the random number that is N steps forward
in the string from that used by the previous particle.
In the second method each start particle is assigned its own start random
number from a list produced separately from the random number generator.
For example, the user might specify an initial random seed. The seed plus
one could be used to initialize the random number string for the first
particle, the seed plus two used to initialize the string for the second particle,
etc. Obviously, the interval between the starting seeds for each particle
could also be chosen as something other than one.
Should the user wish to introduce an additional degree of randomness
into the selection of start random numbers, it is possible to make use of a
second, separate, and independent random number generator in correlated
sampling. One generator would be used in the usual fashion to obtain
random numbers for the particle random walk. The other random number
generator, which generates a sequence of random numbers that is different
from that of the first generator, would be used to obtain the seeds for the
first random number generator from which each correlated particle random
walk is initiated.
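A sketch of this two-generator scheme follows. The seed range 1 through 2147483646 matches that quoted below for 'fltrn', but the routine names and structure here are invented for illustration; the perturbation argument is only a placeholder for where the perturbed physics would enter.

```python
import random

# generator B: used only to draw start seeds for the tracking generator
seed_rng = random.Random(987654321)      # initial seed chosen by the user

def run_history(start_seed, perturbed=False):
    """One particle random walk; generator A is re-seeded per history so
    baseline and perturbed runs consume identical random number strings."""
    walk_rng = random.Random(start_seed)
    # ... the random walk would use walk_rng here; 'perturbed' would
    # alter only interactions inside the perturbed region ...
    return walk_rng.random()             # stand-in for a history score

seeds = [seed_rng.randrange(1, 2147483647) for _ in range(5)]
baseline = [run_history(s) for s in seeds]
perturbed = [run_history(s, perturbed=True) for s in seeds]
# with no perturbation actually applied, correlated histories agree exactly
assert baseline == perturbed
```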
The method of starting each particle track with a different start random
number produces sequences for particle tracking that are positioned more or
less randomly along the full period of the first random number generator.
The advantage of this method is that the maximum length of the random
number chain that might be used by any particle need not be estimated. The
disadvantage is that there is always the risk that a portion of the string used
in the random walk of one particle will be repeated in that of another
particle, perhaps more than once. Although this is an unlikely event for
long-period random number generators (long with respect to the length of
the string used in the random walk for any single particle in the correlated
calculations), it will happen with some (hopefully small) regularity.
However, in particle transport some rare overlap of random number strings
in particle tracking generally does not introduce a discernible bias in the
results.
For correlated sampling it is necessary to know how the generator is
initialized. The function 'fltrn,' for example, is initialized with a single seed
that consists of an integer from one through 2147483646 (see Appendix).
The first random number, and the subsequent sequence of random numbers,
are uniquely defined by this seed. Other random number generators,
especially those incorporating special techniques in order to produce very
long periods, may require several parameters to be set in order to define a
unique and reproducible string of random numbers. Random number
SUBROUTINE strtiout (iout)    strtiout returns the current random number seed
COMMON/startseed/iseed
iout=iseed
RETURN
END
The program in Table 9.1 also gives the ending seed value from
generator B. This allows the user to start a subsequent calculation using this
value as the initial seed, with assurance that the new run will not repeat the
same random number strings used in the previous run. The ending seed for
the last column of results shown in Table 9.3 was 1548773083. As
expected, this is the value used to start the sequence shown in the last
column of Table 9.4. Any string of random numbers that is started with the
seed values shown in the top row of these tables will remain identical for as
long as the string is continued.
(9.10)
Because we assume a start particle weight of one, the average of the weights
of particles passing through the slab in this case compared to the total
number of start particles is
(9.12)
where Xj = 1 for particles that pass through the slab, and zero otherwise; i.e.,
a binomial distribution. Since the square of the weight is equal to the
weight,
(9.13)
(9.14)
For c = 0, z = 1, and z′ = 1.01 this gives the results shown in Table 9.5.
dP/dz = (d/dz)e^(−z) = −e^(−z) (9.15)
(9.16)
Therefore, using a linear estimate for dP/dz with the given values of z, we
would expect to underestimate the correct exponential decline of the number
of particles penetrating the slab with increasing slab thickness by a small
amount, even with perfectly correlated Monte Carlo calculations.
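The size of this finite-difference bias is easy to quantify:

```python
import math

exact = -math.exp(-1.0)                              # dP/dz at z = 1
linear = (math.exp(-1.01) - math.exp(-1.0)) / 0.01   # forward difference
bias = linear / exact    # about 0.995: the linear estimate is ~0.5% low
```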
Using the results of our correlated calculations for c = 0 and seed = 1,
and assuming a linear approximation for dP/dz at z = 1, we obtain the
estimate
Thus we see the Monte Carlo estimate for dP/dz at z = 1, for c = 0 and seed
= 1, is about 2% low compared with the correct value of -0.3679, and about
1.3% low compared with the exact linear approximation. The mean value of
our estimate for dP/dz over the five cases is shown in Table 9.10. This
average is within about 1% of the correct value, and within 0.5% of the
analytical linear approximation result.
(9.19)
From eqn 2.41, the standard deviation of the estimate of the average of
dP/dz is lower than this by a factor of 1/√(n−1) and therefore σ ≈ 0.0031.
That is, we estimate the uncertainty of the estimate of the mean at about
0.4%, and to one standard deviation, for c = 0.5,

dP/dz|z=1 ≈ −0.3720 ± 0.0031 (9.20)
Table 9.12. Results for dP/dz and (dP/dz)² from Five Independent Pairs of Correlated Runs
Seed    ⟨x⟩, z = 1    ⟨x⟩, z′ = 1.01    dP/dz      (dP/dz)²
1       0.445960      0.442202          −0.3758    0.14123
2       0.445583      0.441855          −0.3728    0.13898
3       0.446058      0.442384          −0.3674    0.13498
4       0.445736      0.442105          −0.3631    0.13184
5       0.446067      0.442258          −0.3809    0.14508
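The mean and standard deviation quoted in eqn 9.20 can be reproduced from the dP/dz column of Table 9.12:

```python
import math

slopes = [-0.3758, -0.3728, -0.3674, -0.3631, -0.3809]   # dP/dz column
n = len(slopes)
mean = sum(slopes) / n                       # -0.3720
# population variance from <x^2> - <x>^2, then eqn 2.41's 1/(n-1) factor
var = sum(s * s for s in slopes) / n - mean**2
sigma_mean = math.sqrt(var / (n - 1))        # about 0.0031
```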
(f, g) = ∫ f(ω) g(ω) dω (9.21)

where the integral is over the range of the variable ω. Let us consider an
operator O that acts on the function f and a similar operator O⁺ that acts on
the function g. Then if
(Of, g) = (f, O⁺g) (9.22)
(9.24)
(9.25)
where
(9.26)
+ ∫∫ dΩ′ dE′ Ψ Σs(r; Ω,E → Ω′,E′) Ψ⁺]
By interchanging the integration variables Ω′,E′ and Ω,E in the last term
of one of these equations it is easy to see that the last terms in the integrands
are equal. The difference between the gradient terms in these integrands is
(9.29)
Because the gradient does not operate on direction we can bring the
directional vector inside the gradient operation; i.e., Ω·∇ = ∇·Ω. Applying
this we can combine the two terms in eqn 9.29 and, using the divergence
theorem, see that
(9.30)
where the surface integral is taken over the outer boundary of the problem
geometry. When we apply the vacuum boundary conditions Ψ(r,Ω) = 0 for
n·Ω < 0 and Ψ⁺(r,Ω) = 0 for n·Ω > 0, it is clear that the surface integral
is equal to zero. Therefore
(9.31)
and thus F+ is adjoint to F. Eqns 9.25 and 9.26 define the steady-state
adjoint transport equation.
There are only minor differences between the adjoint and forward
formulations of the transport equation. In the adjoint equation the gradient
term is positive rather than negative, and the collision integral is performed
over the "scattered to" variables of the scattering function rather than the
"scattered from" variables. Thus it should be possible to solve the adjoint
equation using the same general approach used to solve the forward
equation. That is, we should be able to perform random walks on adjoint
particles using substantially the same tracking routines we have used for
forward calculations.
To solve the adjoint transport equation using the Monte Carlo random
walk routines we have already developed it is reasonable to consider a
function in which the signs of Ω and Ω′ are changed. Such a function
would have a negative divergence term in eqn 9.26 and a boundary
condition of zero for n·Ω < 0 at points on the outer boundary. Thus the
equation for this function would look much like that for the forward flux.
This new equation could be solved using the random walk process defined
for the forward transport equation, but with a modified scattering kernel.
That is, the direction of travel of the adjoint particles, which by convention
we will call adjunctons, is actually opposite to that of the value of 0 used in
the calculation.
One may envision the physical process of adjoint transport as one in
which the adjunctons flow in the reverse direction from that of the
"forward" flux. In many practical transport problems the location of an
absorption event is the desired result of the calculation. An absorption is the
last act in the life of a particle, and for such problems the adjoint flux would
start with an absorption. For example, let us consider the use of Monte
Carlo techniques for tracking adjunctons in a problem in which the quantity
of interest is the neutron absorptions inside the system. The first task is to
determine the location of an absorption event so that an adjuncton track can
be initiated from that point. Once this adjoint source location is determined,
other factors such as the energy of the neutron that was absorbed and
possibly the time at which the absorption occurred, could be selected and
assigned to the adjuncton. The last step in the adjoint source definition is to
select the direction from which the absorbed neutron could have come.
Given the location of the absorption, and the direction of travel of the
incoming neutron (which is opposite to that of the outgoing adjuncton), the
cross section information for the materials along this path can be
determined. This cross section information is used to select the initial free-
flight path of the adjuncton from among those possible paths that the
neutron could have traveled to reach the point of the absorption. The end of
this path defines the first adjuncton collision point. At each adjuncton
collision point the random walk must determine how an incident neutron,
traveling in the forward direction, could have produced a post-collision
neutron having the properties of the incident adjuncton. A new energy and
direction of travel is selected after each adjuncton collision, based on this
determination.
If the adjuncton has a collision at a point from which it is not possible for
a real source neutron or post-collision neutron to have originated, the track
is terminated. For example, if a collision event occurs in a pure absorber the
adjuncton is captured because no forward neutron could have departed from
that point in space. If an event occurs at a location from which it is possible
for a neutron to have departed (e.g., arrived from elsewhere and scattered)
the track continues.
The adjoint tracking is continued until the adjuncton either escapes from
the problem geometry or is captured. If an event location is found at which
a neutron could have been born then there is a match between the adjuncton
(a backward-traveling neutron) and a real source (a forward-traveling
neutron) and the match is scored. This match can occur at a forward source
point within a problem geometry or at the boundary of the geometry. The
match can be angular dependent, energy dependent, and time dependent.
The score that is made will depend on the magnitude of the forward source
and the weight of the adjuncton.
The adjoint flux is a direct measure of the importance of forward
neutrons that are, or might be, present at particular points in phase space.
Locations in phase space that are visited frequently by adjunctons would
give high scores if source particles were present, while locations seldom
visited would give low scores.
In general, in order to solve a problem for a response of interest using
adjoint transport, the adjoint source must be related to the response of
interest. We can see the effect of this association for a number of different
cases by multiplying the forward transport eqn 9.23 by \{'+ and the adjoint
eqn 9.25 by \{', and integrating over the phase space variables of energy,
direction, and volume. Differencing the resulting equations and applying
the divergence theorem gives 2
(9.33)
If S⁺ is defined as the response per unit flux, then the right side of eqn 9.33
is equal to the total response to the forward flux. Similarly, the S term on
the left side is equal to the total response to the adjoint flux.
In the case where there is not a vacuum boundary condition, defining
For the case S = 0 and S⁺ = Σa within the volume defined by the surface A,
the right side of eqn 9.35 is the negative of the reaction rate given by eqn
3.9. Further, if the adjoint flux has a vacuum boundary condition, then Ψ⁺out
is zero at the boundary and the left side of eqn 9.35 is the negative of the
adjoint leakage times the forward source incident on the surface. For this
case, changing the signs of both sides of eqn 9.35 shows that the probability
of a particle entering the surface and not leaking (the left side) is equal to
the probability that it will be absorbed (the right side).
(9.36)
[Figure 9.1. Parallel Beam Source Incident on Spherical Detector]
The list of modified PFC subroutines required for calculating the number
of absorptions in the detector using the forward transport mode is shown in
Table 9.13. The modified subroutine 'Source' is shown in Table 9.14. This
subroutine starts all particles just inside the +Z surface of the sphere,
traveling in the -Z direction. The source particles are selected evenly from
across the beam using eqn 9.36. Subroutine 'Col,' shown in Table 9.15,
scores all collisions in the absorber. Subroutine 'Stats,' shown in Table
9.16, calculates the statistics for the results and normalizes these results to a
beam with an intensity of one particle/cm2/sec. The geometry description
for this calculation is shown in Table 9.17. The cross section data provide
for a pure absorber in zone one, with a total cross section of unity and a non-
absorption probability of zero. Zone two, the moderator, has a non-
absorption probability c = 1.0 and a total cross section of unity.
The probability of an incident neutron being absorbed, obtained by
tracking 10⁶ particles incident on the spherical detector of Figure 9.1, is
found to be 0.15087 ± 0.00036. If we assume the beam intensity is one
neutron per unit area we have a total of πr_m² ≈ 12.566 incident neutrons, or
1.8959 ± 0.0045 absorptions per unit beam intensity.
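The normalization can be checked numerically. The detector radius r_m = 2 cm is inferred here from πr_m² ≈ 12.566 and is an assumption, not a value stated in this passage.

```python
import math

r_m = 2.0            # detector radius in cm (assumed from pi*r_m^2 = 12.566)
p_abs = 0.15087      # absorption probability per incident neutron
incident = math.pi * r_m**2       # incident neutrons per unit beam intensity
absorptions = p_abs * incident    # about 1.896 absorptions
```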
(9.37)
where Ω₀ is the beam direction, J(E) is the particle current in the beam, and
δ is the Dirac delta function. The factor of 4π is required to convert the
beam current to an equivalent flux. That is, the neutron current is given in
terms of the angular flux by eqn 3.10. Since angular flux is specified per
steradian, the flux must be 4π times as large as the current in order to have
the correct number of particles streaming in direction Ω₀.
For this problem we can conveniently choose the surface A enclosing the
problem geometry to coincide with the detector outer surface. Since the
forward source is zero inside the detector, by eqn 9.32 we have
(9.38)
The detector response (the adjoint source) is isotropic, and thus the adjoint
flux is symmetric and depends on Ω₀ and Ω only through the scalar product
μ = −Ω₀·Ω. We know that Ψ⁺(μ) is zero for the outgoing direction, or μ <
0. Thus only the incoming forward flux contributes to eqn 9.38. For z > 0
we can define this as Ψ(r,E,Ω) = Ψ(E)δ(Ω−Ω₀), where |r| ≤ r_m and Ψ(E) =
4πJ(E). Introducing this expression into eqn 9.38, and using −Ω₀·n dA =
2πr_m²μ dμ, where μ is the cosine of the angle between Ω₀ and the unit inner
normal, gives
(9.39)
where Ψ*(E,μ) = Ψ⁺(E,−μ). Here Ψ⁺ is the true adjoint flux and Ψ* is the
flux estimated in a Monte Carlo calculation using an appropriate adjoint
source. That is, as discussed earlier, a change in the sign of the direction of
travel of an adjoint particle in a Monte Carlo calculation is required in order
for the Monte Carlo flux to match the true adjoint flux. The modified
adjoint function Ψ* has a vacuum boundary condition in the forward flux
sense. The second integral, that over μ, is equal to the total adjoint leakage
from the detector. Thus to determine the total response of the symmetric
detector using adjoint transport we will obtain the total leakage of
adjunctons from the system using the source S⁺(r) = Σa(r) and then multiply
this leakage by 4πJ(E). Because we are assuming the neutrons are
monoenergetic we can ignore the energy dependence of the cross sections.
It may be useful to consider a different interpretation of this adjoint
solution for a symmetric detector. Kalos3 has shown that a forward
calculation that scores the average reaction rate as a function of radius for an
isotropic source incident on the exterior of the detector will produce the
same total response as a parallel beam. For the calculation of the average
flux as a function of radius, an incident parallel beam of K particles/cm²/sec
can be replaced by an isotropic flux of K particles/cm²/sec over the entire
surface of the sphere. Using this result, the present adjoint solution could be
obtained by matching the adjoint leakage to an isotropic forward flux
incident over the entire surface of the detector. Thus the net leakage of
adjunctons through the surface can be equated to the net number of
absorptions in the detector due to a unit isotropic flux source on the surface.
In either case, to determine the response of our detector using adjoint
transport we need merely calculate the total leakage of adjunctons from the
system.
There are several ways to score the particle leakage from a system. The
most straightforward way is to tally the adjunctons as they actually escape.
Another way is to extrapolate every flight path to the exterior surface of the
detector and score an expectation leakage. A third way is to assume the
Table 9.18. Modified Subroutines for Example 9.3, Adjoint Solution
Subroutine      Location
'Source'        Table 9.19
'Bdrx'          Table 9.20
'Col'           Table 9.21
'Stats'         Table 9.22
The library version of subroutine 'Sph' is used to calculate the distance
'bdout' from the start location of the particle to the surface where the beam
is incident. Since the cross sections for both the absorbing and scattering
media are unity, the attenuation of the adjuncton from the source point to the
surface of the detector in the +Z direction is exp(-bdout) and, as indicated
above, for scoring to a beam there is no 1/4πr₀² factor.
The modified subroutine 'Bdrx' is shown in Table 9.20. This subroutine
scores adjunctons that leak from the sphere. Subroutine 'Col' is shown in
Table 9.21. This subroutine scores a next-event estimate at each collision in
the moderator region along the expectation track toward the beam.
Collisions in the absorber region result in absorptions, and the particle
history is terminated.
A calculation using this modified PFC with 10⁶ start particles resulted in
1.8956 ± 0.0021 adjunctons leaking from the detector when scored using the
leakage to a surface source, and 1.8987 ± 0.0029 for the next-event
estimation to the beam. These adjoint results are in excellent agreement
with the forward result of 1.8959 ± 0.0045.
Consider the geometry used in Example 9.3, but with the following
modifications. First, the absorber is not a pure absorber, but has a non-
absorption probability of c = 0.8. Second, we wish to measure the flux at
three points inside the absorber region. These points are (0,0,0.5), (0,0,-0.5)
and (0,0.5,0). The first two points are located along the Z axis, at the center
of the beam incident on the positive-Z surface of the sphere, while the third
point is off the beam axis. All are equidistant from the center of the
detector.
A list of the modified PFC subroutines needed to solve this problem
using a forward calculation is shown in Table 9.23. Because we now have
point detectors, we will use three next-event estimators. Subroutine 'Col' is
shown in Table 9.24. In this subroutine, every collision results in a flux
estimate being made for each of the three detectors. The method for doing
this is described in Chapter 7. The calculations again take advantage of the
fact that the total cross section is everywhere unity in this example, and
therefore the attenuation is determined solely by the distance between
points. We should recall that the next-event estimator results will be suspect
and should be used with care. In order to use them with confidence in the
absence of other analysis it would be essential to examine the number of
collisions in the vicinity of each detector and to ensure that the relevant
phase space has been adequately sampled.
Table 9.24. Subroutine 'Col' for Example 9.4, Forward Solution
SUBROUTINE COL
REAL(8) FLTRN,delta,tmp,pi
REAL(8) x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ
COMMON/PART/x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ,nzcur,newzn,ngroup
REAL(8) cinfp,dtr,xsec,dcur
COMMON/TRACK/cinfp,dtr,xsec,dcur
REAL(8) sigt(20),c(20)    ! dimensions allow up to 20 different media
COMMON/GEOM/sigt,c        ! sigt is total cross section, c is non-absorption prob
detector locations using a single adjoint run by using schemes similar to that
used here.
[Figure: adjoint source and detector points on the coordinate axes; adjoint detectors 1, 2, and 3 correspond to the beams incident from the +Z, -Z, and +Y directions.]
Figure 9.2. Geometry for Three Adjoint Source and Detector Points
Subroutine 'Col' is shown in Table 9.28. This routine accounts for the
weight loss that is now possible at each collision in the absorbing material,
and the use of three adjoint point detectors. Subroutine 'Stats' is shown in
Table 9.29. The uncollided contribution to each of the detectors is
calculated in this routine. The results and associated statistics are also
calculated for each detector.
The results of the present adjoint calculation along with those for the
forward calculation are presented in Table 9.30. Based on the differences in
run times for the forward (26 seconds) versus the adjoint (35 seconds)
calculations, a 16% improvement in the standard deviations should be
expected. However, the adjoint results are more accurate than the forward
results by a factor of thirteen for the first detector and a factor of about
seven for the other two detectors. If the estimates of the standard deviation
for the forward calculation are assumed accurate it would take a forward
calculation fifty or more times the run time of the adjoint calculation to
match the standard deviations for the latter.
Table 9.27. Subroutine 'Source' for Example 9.4, Adjoint Solution
SUBROUTINE SOURCE
REAL(8) x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ
COMMON/PART/x,y,z,u,v,w,xo,yo,zo,uo,vo,wo,wate,age,energ,nzcur,newzn,ngroup
pi=2.0d0*DACOS(0.0d0)   ! keep for more complicated source that needs pi
x=0.0d0; y=0.0d0; z=0.5d0
nzcur=1                 ! assumes origin is in zone one
CALL ISOOUT             ! direction chosen isotropically
wate=1.0d0              ! particle starts with a weight of one
RETURN
END
The forward result for the first detector is slightly higher than the adjoint
result, and its standard deviation is relatively large. These results are
consistent with a collision having occurred close to that detector. The
forward results for the second and third detectors are below those of the
adjoint calculation, and the standard deviations are smaller than that of the
forward result for the first detector. In fact, the forward result for the third
detector is just over three sigma below the adjoint result. While this is
statistically possible, it would seem more likely to be a result of
undersampling in the vicinity of the forward point detector. All of this is
consistent with the performance of next-event estimators and confirms that
the results shown here for the forward calculation are not definitive. The
∫₀^∞ dE ∫(4π) dΩ K(r, Ω′, E′ → Ω, E) = Σ_s(r, E′)/Σ_t(r, E′)   (9.40)
For many problems of interest, the weight factor Σ_s/Σ_t is less than or equal to
one. Even in multiplying systems this factor is relatively small and the
(9.43)
The quantity in square brackets is the weight factor for adjoint scattering.
After the post-collision direction and energy are chosen from the first term
in the integrand of eqn 9.43, the post-collision adjuncton weight must be
modified by this factor. Unlike the weight factor in the forward collision
kernel of eqn 9.41, in many cases the weight factor in eqn 9.43 is much
greater than unity, even if the upper bound on the energy integral of the
normalization factor is set to some maximum value of interest. Furthermore,
the weight factor varies widely with the adjuncton energy and the type of
nucleus struck. As a result one finds the weight of the adjuncton fluctuating
widely. Because of these fluctuations, in many cases the simple option of
particle splitting is not an effective means of controlling the adjuncton
weights.
The reason for the difficulty in sampling the adjoint collision kernel can
be understood by considering the problem of selecting the type of reaction
that an adjuncton will experience at a collision. The reaction type, as well
as the angle of scattering, is dependent on the incoming particle energy,
which is an unknown quantity. The post-collision forward particle energy
and direction (i.e., the incoming adjuncton energy and direction) may have
been the result of many different reactions from many different incident
particle energies and directions. Selecting an incoming energy requires
v_n σ_eff(E_n) = ∫₀^∞ σ_s(v_rel) v_rel Pr(v_rel|E_t) Pr(E_t) dE_t   (9.44)
where Pr(v_rel|E_t) designates the probability of v_rel given an energy E_t for the
target nucleus, and Pr(E_t) is the probability of a target nucleus having an
energy of E_t. From Figure 9.3, by the law of cosines, a neutron moving with
velocity v_n that encounters a target nucleus with velocity v_t will see a
relative scalar velocity of

v_rel = (v_n² + v_t² - 2 v_n v_t μ_t)^(1/2)   (9.45)

Here μ_t = cosθ = Ω_t·Ω_n is the cosine of the angle between the direction of
travel of the target nucleus, Ω_t, and that of the neutron, Ω_n.
Figure 9.3. Interaction Between a Neutron and a Moving Target
β = (A/2kT)^(1/2)   (9.47)
Then, with the assumption that σ_s(v_rel) = constant = σ₀, and an isotropic
distribution of directions of travel of the target nuclei in the laboratory
coordinate system, we have
(9.48)
Thus the effective scattering cross section of eqn 9.44 can be written as
(9.49)
(9.50)
(9.51)
or
(9.52)
With these equations one can sample the thermal motion of the target
atom. The integrand of eqn 9.51 is proportional to the probability of a target
nucleus having velocity x. This probability falls rapidly with increasing x
and, because the total probability of x > 3 is less than 0.00125, values of x
greater than 3 may generally be neglected. This is equivalent to neglecting
target energies greater than 9 kT. This assumption simplifies the sampling
of thermal scattering and, given the approximation of the overall model,
does not greatly impact the accuracy of the results.
The procedure for sampling thermal scattering is based on sampling the
response function for the collision density,
(9.54)
Given an energy E_n for the neutron entering a scattering event, the effective
cross section for such a collision, σ_eff(E_n), can be calculated using eqn 9.52.
For Monte Carlo tracking purposes it is also necessary to select a specific
target velocity for the nucleus involved in the collision. This velocity, along
with the target mass and the neutron velocity, can then be used to calculate
(9.55)
(9.56)
where ξ is a random number. The cosine μ_t defines the half angle of a cone
around the direction of travel of the neutron, Ω_n = (u,v,w), from which the
target nucleus is approaching. By finding a random azimuthal point on the
cone of this half angle we can fix the direction of flight of the target nucleus,
which we will call (u_t,v_t,w_t).
Since the scattering is assumed isotropic in the center of mass, a new
random direction of flight of the neutron in the center-of-mass coordinates
(u0,v0,w0) is selected, and from this the post-collision neutron energy E′ and
direction of flight in the laboratory system (u′,v′,w′) are determined. The
derivation of these quantities requires an extension of the analysis of
Sections 4.1 and 4.2 because in thermal collisions both the neutron and the
target nucleus are assumed to be moving.c
c This derivation follows that of E. D. Cashwell and C. J. Everett, A Practical Manual on the
Monte Carlo Method for Random Walk Problems, Pergamon Press, New York, 1959, pp.
56ff.
V_cm = (V₀ + A V_t)/(1 + A)   (9.57)
(9.58)
The diagram of Figure 9.5 also applies to the scattered nucleus as indicated;
however, we are concerned here only with the post-collision neutron
parameters.
The pre-collision neutron speed in the center-of-mass coordinates is
|v₀| = A v_rel/(1 + A)   (9.60)
W′₀ = |v′₀| Ω′₀   (9.61)
Finally, from Figure 9.5, the post-collision neutron velocity in the laboratory
coordinates will be given by

W′ = W′₀ + W_cm   (9.62)

Using W_cm = V_cm, where V_cm is given by eqn 9.57, and eqns 9.60 and
9.61 gives
W′ = |v′₀| Ω′₀ + (V₀ + A V_t)/(1 + A) = [A v_rel/(1 + A)] Ω′₀ + (V₀ + A V_t)/(1 + A)   (9.63)
(9.64)
(9.65)
(9.66)
(9.67)
(9.68)
(9.69)
Then from the definition of R (eqn 9.66) the energy of the neutron following
the collision is
example. The value for x was calculated using rejection over the range 0 <
x < 4. Although in practice 0 < x < 3 is usually sufficient, the broader range
was used in this example in order to improve the agreement with the
theoretical results.
Table 9.32. Subroutine 'Therm' and Function 'Erf' for Example 9.5
SUBROUTINE THERM(a,e,temp,u,v,w)
REAL(8) fltrn
DATA bol/8.617342e-5/,pi/3.14159265/   ! bol=Boltzmann const
alpha = SQRT(a*e/temp/bol)
sigrat = (1.+1./2./alpha**2)*erf(alpha)+EXP(-alpha**2)/alpha/1.77245385
1 x1 = 4.*fltrn()   ! 'x' of eqn 9.48
x2 = ABS(alpha-x1)**3; x3 = (alpha+x1)**3-x2
x4 = 0.45304*x1*x3*EXP(-x1**2)/alpha**2/sigrat
IF(x4-fltrn().lt.0.) GO TO 1
x7 = x1/alpha   ! target velocity
x5 = (alpha+x1)**3
rand = fltrn()
xmut = (alpha**2+x1**2-(x5+rand*(x2-x5))**(2./3.))/2./alpha/x1   ! mu of eqn 9.55
IF(ABS(xmut).lt.1.) GO TO 99
xmut = xmut*0.999995
IF(ABS(xmut).lt.1.) GO TO 99
WRITE(*,*) e,alpha,sigrat,x1,x2,x3,x4,x5,x7,rand,xmut
STOP 1
99 b = SQRT(1.-xmut**2)
rand = fltrn(); c = COS(2.*pi*rand); d = SIN(2.*pi*rand)
IF(ABS(w).lt.0.999) GO TO 2
sqrtu = SQRT(1.-u*u); wt = b*(c*u*w-d*v)/sqrtu+xmut*w
vt = b*(c*u*v+d*w)/sqrtu+xmut*v; ut = xmut*u-b*c*sqrtu
GO TO 3
2 sqrtw = SQRT(1.-w*w); ut = b*(c*w*u-d*v)/sqrtw+xmut*u
vt = b*(c*w*v+d*u)/sqrtw+xmut*v; wt = xmut*w-b*c*sqrtw
3 w0 = 2.*fltrn()-1.   ! get isotropic unit vector
sinth = SQRT(1.-w0*w0); phi = 2.*pi*fltrn()
u0 = sinth*COS(phi); v0 = sinth*SIN(phi); y1 = 1.+x7*(x7-2.*xmut)
IF(y1.gt.0.) GO TO 98
WRITE(*,*) x7, xmut, y1
STOP 2
98 y1 = SQRT(y1); y3 = u+a*(u0*y1+x7*ut); y4 = v+a*(v0*y1+x7*vt)
y5 = w+a*(w0*y1+x7*wt); y6 = y3**2+y4**2+y5**2
e = e*y6/(1.+a)**2   ! outgoing neutron energy and direction
y2 = 1./SQRT(y6); u = y3*y2; v = y4*y2; w = y5*y2
RETURN
END
FUNCTION ERF(x)
IF(x.gt.1.5) GO TO 1
erf = x-x**3/3.+x**5/10.-x**7/42.+x**9/216.-x**11/1320.+x**13/9360.
erf = erf*1.12837917
RETURN
1 erf = 1.-.5/x/x+.75/x**4-1.875/x**6+3.281/x**8-7.383/x**10
erf = 1.-EXP(-x*x)/x/1.77245385*erf
RETURN
END
where the function is normalized so that the integral over all energies gives a
unit flux.
If eqn 9.71 is differentiated with respect to E and the result set equal to
zero, it can be seen that the maximum flux occurs at an energy of kT.
Multiplying eqn 9.71 by the energy, and performing the integration to find
the average energy for the flux, gives a result of 2kT. This calculation of the
average energy will be used to help confirm the accuracy of the current
Monte Carlo results.
The results obtained for neutron spectra in hydrogen for three different
temperatures, again using 10⁶ source particles, are plotted in Figure 9.6.
The uncertainties shown do not measure all sources of error; for example,
they do not include a consideration of whether sufficient collisions have
Exercises
3. Example 9.5 assumed all source particles were born with an energy of
0.025 eV and their energy was scored after 20 collisions. Consider the
following variations on that calculation.
a. Repeat the example calculation using the technique of tracking a single
neutron through many collisions.
b. Use the technique of following many neutrons and scoring the energy
of each after some fixed number of collisions, but choose several
different start energies. Then, instead of using a fixed start energy, vary
the start energies within a single calculation. Did the results change?
c. Using the same parameter values as those used in the example,
examine the effect of changing the number of collisions a neutron is
allowed to undergo before scoring its energy. Is there an optimum
number of collisions that will minimize the run time of the calculation
while guaranteeing an equilibrium spectrum?
I See George I. Bell and Samuel Glasstone, Nuclear Reactor Theory, Van Nostrand Reinhold,
New York, 1970, Section 2.7, for the effect of the change of variables on the one-speed
transport equation.
2 This manipulation is described in numerous places and the details are not reproduced here.
See ibid., pp. 257-58; James H. Renken, "Use of Solutions to the Adjoint Transport
Equation for the Evaluation of Radiation Shield Designs," Sandia Laboratories,
Albuquerque, NM, Report SC-RR-70-98, January 1970; G. E. Hansen and H. A.
Sandmeier, "Neutron Penetration Factors Obtained by Using Adjoint Transport
Calculations," Nucl Sci Eng 22, 1965, pp. 315-320.
3 M. H. Kalos, Appendix G of "ANTE 2, Adjoint Monte Carlo Time-Dependent Neutron
Transport Code in Combinatorial Geometry," Mathematical Applications Group Inc.
(MAGI), White Plains, NY, 1970. See also Hansen and Sandmeier, op. cit.
4 See Bell and Glasstone, op. cit., pp. 604-5.
5 Ibid.
Appendix
Random Number Generators
a "It is the almost universal practice to use, as 'random generators,' devices ... which do not
even pretend to have more than a trace of true randomness. ... This would be very
discouraging, were it not for the fact that, when 'random numbers' are used in practice, we
generally require only a few of the properties of randomness, and all others are immaterial."
J. H. Halton, "A Retrospective and Prospective Survey of the Monte Carlo Method," SIAM
Review 12, 1970, pp. 5-6.
• The period should be long (i.e., such a generator should produce many numbers
before repeating its sequence such that no portion of the string is reused in a
calculation),
• The numbers produced should tend to be equidistributed (i.e., a string of several
hundred or more numbers should tend to be uniformly distributed over the
interval of interest),
• The numbers should be uncorrelated (i.e., each number in the sequence should
be statistically independent of, or uncorrelated with, the previous numbers), and
• The algorithm should have rapid accessibility (i.e., the computational time
devoted to obtaining the random numbers should be small).
Interestingly, a high degree of equidistribution is not usually a feature of
truly random number strings. Thus a truly random string of numbers may not
be as effective as a pseudorandom, equidistributed string in Monte Carlo as
applied to particle transport, where a thorough sampling of the important
regions of phase space is necessary in order to obtain a correct answer.
Deliberately equidistributed strings of "random" numbers are called quasi-
random, and a special discipline has been developed to address such random
number strings.
One might anticipate that complexity in a random number generator
would be more likely to ensure randomness than simplicity. As a result a
number of complicated, multi-step algorithms have been proposed for
generating random number sequences. However, complication appears to
offer no guarantee of randomness and generally incurs high costs in
computing requirements. Some extremely complicated algorithms that have
been postulated have been found to make extraordinarily bad random
number generators.
Users frequently are tempted to "tinker" with random number generators
in an attempt to improve them. However, unless the tinkerer is quite
knowledgeable in the arcane art of tuning random number generators, this
almost always results in reduced performance and can produce very bad
results. Bad random number generators are easy to come by; good ones are
not.
Random number generation on a digital computer often involves the use
of remainders, or low-order bits, extracted from the results of various
mathematical procedures. In the simplest case each random number x_n is a
function only of the previous random number x_{n-1}:
(A.I)
This simple formulation certainly meets one of the requirements for random
number generators listed above - it allows the user to select a start random
number, x₀, and be assured that the resulting sequence will always be
Here a and c are integers and mod(ζ,η) designates the modulo function; i.e.,
the remainder after subtracting from ζ the largest integral multiple of η that
is less than or equal to ζ. Random numbers x_n in [0,1) are obtained from the
y_n by the normalization
(A.3)
The starting point for the sequence of eqn A.2 is y₀, which may be set by the
user for each calculation. The quantity y₀ is sometimes called the start
random number, or the seed, even though the actual start random number is
y₀/M.
It is obvious that in the string of numbers produced by eqn A.2 each
number can appear once and only once before the string repeats. Therefore
the period of a linear congruential random number generator - the length of
the sequence generated before the starting number is repeated - is per(x_n) ≤
M. That is, in the best case of eqn A.2 the y_n will take on all values in the
range [0, M-1], after which the sequence will be repeated. A generator that
meets this best-case requirement is known as a purely periodic random
number generator. For a linear congruential random number generator the
maximum period depends on the computer word length. The largest period
for a 16-bit machine in single precision is 2¹⁶ = 65536. A 32-bit machine in
single precision, or a 16-bit machine in double precision, will extend the
maximum period to 2³² ≈ 4.29×10⁹.
Conditions that guarantee per(x_n) = M are known. Specifically, the linear
congruential sequence of eqn A.2 has the period M if and only if
a) c is relatively prime to M; i.e., c and M have no common prime factors
greater than 1,
b) a-1 is a multiple of p for every prime p dividing M, and
c) a-1 is a multiple of 4 if M is a multiple of 4.
Selecting coefficients for eqn A.2 that guarantee per(y_n) = M is no guarantee
that the random numbers that result from the algorithm will be "good." For
It is apparent that for this example the choice a = 1 is not very "good."
For each of the values of c given in the table the period of the sequence is
indeed equal to M, but the "randomness" of the sequence leaves a great deal
to be desired. A "better" sequence can be obtained for large a. For
example, for a = 9, c = 11, and y₀ = 0, we obtain the sequence {0, 11, 14, 9,
12, 7, 10, 5, 8, 3, 6, 1, 4, 15, 2, 13, 0}. This sequence begins to "look"
somewhat better than those in Table A.1. However, as has been pointed out
by many authors, the virtue of a sequence of "random" numbers, like beauty,
is in the eye of the beholder.2
congruential or not, is that the x_n are rational fractions and the range (0,1) is
sampled discretely rather than continuously. A second limitation of the
linear congruential generator is that the sequence of geometric points
defined by adjacent random numbers x_n produced by such a generator lies
in hyperplanes. For example, in 3-dimensional space eqn A.2 will always
produce sets of points {x1,x2,x3}, {x2,x3,x4}, {x3,x4,x5}, etc., that lie on a
finite set of planes. This phenomenon is called
the lattice structure of the generator.
For k-dimensional space there are at most M^(1/k) such planes on which the
points lie and, depending on the choice of parameters, the points may lie on
many fewer planes than this. The separation between these hyperplanes has
a minimum value depending on the specifics of the generator. Even for
values of M, a, and c that produce full period sequences, eqn A.2 tends to
produce points that lie on widely spaced planes. Studies of the lattice
structure produced by various combinations of linear congruential
parameters have shown that some combinations are significantly better than
others; i.e., that the distance between their lattice planes is close to the
minimum value.
With the help of Fishman, Park and Miller have defined what they refer
to as a "good, minimal standard random number generator" with c = 0.4 For
a computer word length of 32 bits they recommend a multiplicative
congruential generator with M = 2³¹ - 1 = 2147483647, and a = 7⁵ = 16807.
The lattice structure for this generator does not quite meet the criterion,
recommended by Fishman, that the spacing between the lattice planes be
less than 1.25 times the minimum, but the lattice spacing for this generator
is within about 1.3 times the minimum. Suggested alternatives for a are
39373, 48271, and 69621. L'Ecuyer has tested this "minimal standard
random number generator" and found that it failed six of ten tests for
randomness. b However, the tests were stringent and the fact is that the Park-
Miller random number generator is of good quality.
Implementing a program for using eqn A.2 that does not exceed the
integer capacity of a 32-bit machine requires avoiding the multiplication a ×
y_n. This can be done by performing the modulo process prior to the
multiplication, as in the method of Schrage.5 In this method we factor M into

M = qa + r   (A.4)
where
b Pierre L'Ecuyer, "Efficient and Portable Combined Random Number Generators," Comm
ACM 31, 1988, pp. 742-774. L'Ecuyer also tested a = 39373 and found it to be a good
choice. The other two choices for a suggested by Park and Miller have not been tested
thoroughly but appear to be as good or better than 39373.
q = div(M,a)   (A.5)
and
r = mod(M,a)   (A.6)
Here div(S,T) is the largest integer that, when multiplied by T, gives a result
that is less than or equal to S. We note that q ≥ 1 and 1 ≤ r ≤ a-1, when a <
M is relatively prime to M.
Let us rewrite eqn A.2, with c = 0, as
Substituting this into eqn A.5, and using qa = M - r from eqn A.4, gives
or
and
It can be shown that for r < q, and for all y in {1, 2, ..., M-1}, that
(1) δ(y) is either 0 or 1
(2) both a mod(y,q) and r div(y,q) are in {0, 1, 2, ..., M-1}
(3) |γ(y)| ≤ M-1
c The authors wish to thank Lester Petrie of the Oak Ridge National Laboratory for his
assistance in suggesting and implementing this random number generator.
of the answer. Thus if the calculation is broken into n pieces, the ith piece
of which provides the intermediate mean Vi, with variance Var(Vi), then the
best estimate of the true mean is
The smallest number that can be returned by 'fltrn' is x_min = [2³¹-1]⁻¹ ≈
4.66×10⁻¹⁰. Every number returned will be an integral multiple of this
value, and the largest number returned will be 1 - x_min. That is, the random
numbers will be selected from a discrete grid determined by the specifics of
the generator and the word length of the computer being used. The values
of a random variable other than those defined by such discrete grid points
cannot be accessed. Furthermore, every number available to the generator
will be used once and only once during each period of the generator. Thus
'fltrn' is an exclusive generator; i.e., a number once selected is discarded
from the remainder of the numbers from which the next number in the
sequence will be selected. These constraints clearly bias the generator since
a truly random string should be continuous, and the probability of selecting
a particular number should be constant regardless of the values already
selected.
To avoid the risk of encountering long-range correlations in the number
sequences produced by pseudorandom number generators, it is frequently
recommended that only a small part, typically no more than 10%, of the
period of a random number generator be used in a given Monte Carlo
calculation. The period of 'fltrn' is approximately 2.1475×10⁹. This
period is acceptable for the simple examples presented in this book but is
not adequate for calculations in which many particles are tracked or in
which many random numbers are used per start particle. Thus it should be
considered at best marginal for production use.
Fortunately various algorithms are available that enable one to obtain a
period for a random number generator that is much greater than 2^β, where β
is the word length of the computer on which the generator is used. Such
algorithms include the multiple-recursive congruential method, the explicit
inversive congruential technique, and various shift-register methods. An
alternative formulation of the Park-Miller generator, combined with a
Marsaglia shift sequence, is presented in Numerical Recipes in
Fortran 90: The Art of Parallel Scientific Computing.6 This shift-
register version of the minimum standard generator has a period of about
3.1×10¹⁸. The Mersenne Twister, developed by Matsumoto and Nishimura,
has a reported period of 2¹⁹⁹³⁷ - 1.7 The interested reader can find information
on other random number generators in the books and articles cited in the
bibliography.
1 John von Neumann, "Various Techniques Used in Connection With Random Digits," Monte
Carlo Method, A. S. Householder, G. E. Forsythe, and H. H. Germond, eds., National
Bureau of Standards Applied Mathematics Series 12, U. S. Government Printing Office,
Washington, D.C., 1951, p. 36.
2 William H. Press, Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling,
Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, New
York, Cambridge, 1988, p. 204.
3 D. H. Lehmer, "Mathematical methods in large-scale computing units," Annu. Comput. Lab.
26, Harvard Univ., 1951, pp. 141-146.
4 Stephen K. Park and Keith W. Miller, "Random Number Generators: Good Ones are Hard
to Find," Comm ACM 31, 1988, pp. 1192-1201.
The present volume has only touched on the wide range of techniques
and applications amenable to Monte Carlo methods. There are many books
available to explain and elaborate on various aspects of using Monte Carlo
methods to perform radiation transport calculations, and the interested
reader is encouraged to pursue those areas of particular interest or otherwise
appropriate to a specific area of specialty. Technical journals frequently
publish papers that describe various applications of Monte Carlo and report
the latest research and developments in the field. Finally, many technical
reports from laboratories and industry contain items of interest.
Listed below are a few references that the reader might find useful for
considering alternative approaches to a particular topic, for providing
elaboration on areas of interest, or for extending study into additional areas.
The early references, which were mostly published prior to 1980, remain
excellent sources of information on Monte Carlo. However, they tend to be
limited in scope compared with recent publications. Therefore a sampling
of both old and recent books is cited. Finally, state-of-the-art Monte Carlo
transport capability exists in the production codes that are in use by the
community at large. These codes are usually described in users' manuals
that are provided with the software. Only a sampling of such users'
manuals, both old and new, is provided here and the actual list of codes
available is much larger than that shown.
The production codes, and the latest editions of the users' manuals, may
be obtained from the organizations sponsoring the development of the
codes, or from the Radiation Safety Information Computational Center at
the Oak Ridge National Laboratory, Oak Ridge, Tennessee. This Center is
supported by the U. S. Department of Energy and other organizations for the
purpose of compiling and distributing software related to radiation safety,
transport, and shielding. The Center is a convenient source for obtaining all
manner of radiation analysis software and documentation, as well as
practical help in implementing the software on various computers.
330 Bibliography