Kilby 2021


Nuclear Inst. and Methods in Physics Research, A 1001 (2021) 165236

journal homepage: www.elsevier.com/locate/nima

Comparison of a semi-analytic variance reduction technique to classical Monte Carlo variance reduction techniques for high aspect ratio pencil beam collimators for emission tomography applications

Seth Kilby a, Jack Fletcher a, Zhongmin Jin a, Ashish Avachat a, Devin Imholte b, Nicolas Woolstenhulme b, Hyoung K. Lee a, Joseph Graham a,*

a Missouri University of Science and Technology, 301 W 14th St, Rolla, MO 65409, United States of America
b Idaho National Laboratory, 1955 N. Fremont Avenue, Idaho Falls, ID 83415, United States of America

ARTICLE INFO

Keywords:
Variance reduction
Monte Carlo N Particle Transport (MCNP)
Emission tomography

ABSTRACT

A semi-analytic variance reduction technique developed for collimated gamma emission tomography problems was compared to classic Monte Carlo variance reduction techniques within the Monte Carlo N Particle Transport (MCNP) code. In the semi-analytic technique, a computationally efficient, non-analog, monodirectional source biased Monte Carlo simulation is first performed. Analytical expressions or empirical values are then used to correct for solid angle and field-of-view effects introduced by the non-analog source definition. This variance reduction technique was compared with deterministic transport sphere (DXTRAN) and geometry splitting variance reduction schemes to determine the accuracy and computational savings of each technique relative to an analog pulse height tally (F8 tally) at 1, 3, 5, and 10 mm collimator aperture radii. For radii of 0.5 mm and 0.3 mm, a DXTRAN sphere was used in place of an analog F8 tally, due to large particle history demands, to analyze the accuracy of the semi-analytic variance reduction technique. The computational savings and accuracy were evaluated for six to seven photopeaks depending on the method used. The monodirectional source biasing technique overestimated the count rates by approximately 9%–19% when the radius is less than 3 mm, but overestimated by a factor of 2 to 7 when the radius is greater than or equal to 3 mm. The monodirectionally source biased technique offered computational saving factors on the order of 10^8–10^13 over the 1–10 mm collimator radii studied. DXTRAN and geometry splitting methods yielded higher accuracy, but computational savings ranged from approximately 0.13 to 2.2 and 0.07 to 2.9, respectively, indicating marginal improvement at best.

1. Introduction

1.1. High aspect ratio pencil beam collimators for gamma ray emission tomography

Photon collimators have many applications in the fields of medicine and radiation imaging [1]. Collimators allow for control over beam shape and width, and consequently facilitate imaging and reduce radiation dose to a specific person or object. Gamma ray collimators work by defining a small solid angle from which isotropically emitted gamma rays are sampled. The inherent difficulty in modeling narrow, high aspect ratio collimators using MC radiation transport is that only a small fraction of particles is likely to pass through the collimator. This proportion is comparable to the ratio of the collimator solid angle to the total solid angle. Thus, the smaller the collimator, the fewer the number of particle histories that contribute non-zero weights to tallies placed on the exiting side of the collimator. The purpose of this work is to use several variance reduction techniques to transport gamma rays through a narrow collimator and compare the rates of convergence of each technique to that of fully analog simulation. This work was motivated by a project to design a submersible gamma tomography system to examine next-generation nuclear fuels [2]. Gamma ray tomography is a technique that has been utilized to examine full scale pressurized water reactor and boiling water reactor fuel assemblies. For large scale emission tomography problems such as those conducted at the Halden Reactor Project in Norway or the Forsmark Nuclear Power Plant in Sweden, pencil beam collimators are commonly used in conjunction with high purity germanium (HPGe) detectors. These systems use the strong material attenuation of tungsten or lead with the high energy resolution, typically less than 1% at 662 keV, of an HPGe detector [3–5]. However, large scintillator crystals coupled with photo-multiplier tubes can be used instead of HPGe detectors to provide greater counting

∗ Corresponding author.
E-mail address: grahamjose@mst.edu (J. Graham).

https://doi.org/10.1016/j.nima.2021.165236
Received 9 June 2020; Received in revised form 11 February 2021; Accepted 11 March 2021
Available online 19 March 2021
0168-9002/© 2021 Elsevier B.V. All rights reserved.

efficiency at the cost of energy resolution. Nonetheless, these systems are difficult to model as they generally require thousands of detector projections to obtain a well-constructed tomograph. Thus, it is not a particularly efficient use of computational resources to model detector signal using Monte Carlo N Particle Transport (MCNP) or GEANT4 in low particle count regions such as those created by collimation [3–12]. It is common to rely on numerical techniques, such as ray tracing matrices, at the expense of some degree of accuracy, as the user makes assumptions about either the attenuating media or the particle interactions throughout the problem. For emission tomography problems that analyze a fuel assembly or even a single fuel pellet, the associated modeling challenges consist of low-sampled activity and high geometric and material attenuation. Low-sampled activity tends to lead to longer acquisition times, and smaller fuel geometries force users to utilize smaller collimator diameters to improve spatial resolution, which further increases geometric attenuation. Efficient Monte Carlo (MC) or deterministic methodologies are therefore necessary for these high aspect ratio transport problems in order to acquire an efficient simulation without sacrificing accurate results.

1.2. Monte Carlo vs deterministic techniques

In radiation imaging system design, radiation transport modeling can be used to predict detector response and performance before physical prototyping begins. Within radiation transport modeling, the techniques that are primarily used are deterministic, MC, or some combination of the two. Deterministic techniques focus on solving systems of equations that govern the mean properties of the particles being transported, while MC techniques calculate the stochastic trajectories of a vast number of individual particle histories. Deterministic techniques are generally faster than MC based techniques at calculating statistical properties. Deterministic equations predict average particle behavior such as average energy, average cross section, average flux, or grouped probability distributions. Discrete ordinates techniques, the most common of the deterministic techniques, rely on phase space discretization, which can be thought of as the segmentation of phase space into boxes of decreasing size toward the region of interest. As a result, discrete ordinates techniques can provide accurate information regarding parameters such as the flux throughout all of phase space [13]. Consequently, deterministic techniques are simpler and place less demand on computational resources. However, a deterministic approach generally requires the selection of a suitable averaging, grouping, or differencing scheme prior to solution.

MC, in contrast, is a powerful technique that can statistically transport particles through user-defined geometries and media. MC methods rely on simulating individual particle tracks and recording, in a tally, their average behavior. The nature of the particles in the system can be inferred, using the central limit theorem, from the average behavior of the simulated particles recorded in tallies, assuming large enough particle histories [14]. MC methods tend to be inefficient in comparison to their deterministic counterparts, as most computational resources are utilized tracking particles that are not likely to contribute to the solution. Compared to deterministic techniques, however, there are two main advantages of MC transport. First, MC techniques have a high degree of efficacy for transport within more complex geometries, which implies that they provide greater spatial accuracy for a given complex transport geometry. Second, they are generally not bound by strict energy grouping; as a result, the user may utilize continuous energy transport [15]. The coupling of continuous energy and complex 3D geometries with time allows for a detailed representation of the transport problem. Since MC does not use increasingly discretized phase space boxes such as those used in discrete ordinates methods, there is no averaging required a priori in phase space. Realizing such solutions, however, requires increasingly powerful computational resources or methods that accelerate the statistical convergence of a given radiation transport problem. While arguably a more accurate representation of the physics, the time required for a well-converged solution is larger than that of typical deterministic methodologies, and as a result, certain problems can be computationally prohibitive. This forces users to choose between efficiency and accuracy in their runs. It is possible to compromise, however, by using variance reduction techniques in conjunction with MC to reduce the computational time and resources needed to converge, within a specified certainty, to a solution.

The primary disadvantage of variance reduction is the ability of a technique to give solutions that misrepresent the physics of the problem. Variance reduction techniques are non-analog; that is to say, they introduce some non-physical mechanism (e.g. splitting one particle into two half-weight particles) to accelerate statistical convergence [13,15]. In order to produce a result with a physical meaning, they must also apply a procedure for correcting the running average in question. Additionally, while variance reduction techniques can help bridge the gap between MC and deterministic approaches, variance reduction can only reduce computational cost by an amount that is reflective of the method utilized, the problem geometry, and the transport physics.

1.3. Monte Carlo variance reduction techniques

Within the MCNP package, there exist global variance reduction techniques that users enable in order to increase the precision of tally statistics. As mentioned previously, these techniques can be split into various categories, and the techniques of focus will be those that can be applied to an F8 pulse height tally. It should be noted that the weight window generator was designed for non-F8 tallies: the generator estimates the importances of single particles within phase space, and it cannot estimate the importance of a collection of particles in phase space. Therefore, within MCNP, the weight window generator is incompatible with an F8 tally [14].

Perhaps the simplest applicable variance reduction technique is geometry splitting and roulette, which is a population control method. In geometry splitting and roulette, a geometry is subdivided into smaller pieces with a different importance value in each cell [13,16]. This allows particles to be virtually split, which increases the number of potential particles impinging on the tally. For example, if a geometry is divided into two sublayers, then as a particle crosses from the layer closest to the source into the layer closest to the tally surface, it is split into two particles with identical velocity, but each with a fraction of the original particle's weight. If particles are successively split as they approach a tally surface, the total number of histories contributing to a mean can be multiplied, thereby reducing the variance on the mean. In the case of roulette, a particle travels from a cell of higher importance to one of lower importance, and as a result, this particle has a probability to roulette. This process does not change the tally mean due to the internal weight adjustment. The result is either an increased quantity of particles within a cell, in the case of geometry splitting, or a reduction in tracked particles not likely to interact with the tally. This is generally considered a safe variance reduction technique because weight is conserved while the particle count increases or decreases. For example, if a photon transports from a cell of importance 1 to a cell of importance 3, the number of branches created from that split is 3, while the weight of each branch is one third. Splitting occurs when a particle moves from a lower importance cell to a higher importance cell. If the particle moves from the cell of importance 3 to the cell of importance 1, then the particle has a 33% probability to survive the transport and a 67% probability to roulette. If the particle survives the winnowing process, its weight is readjusted to 1. If R is any ratio of importances between two consecutive cells, then a general form can be stated as seen in Eq. (1).

R = I_2 / I_1  (1)
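The splitting and roulette bookkeeping above can be condensed into a few lines. The following is an illustrative Python sketch, not MCNP's internal implementation; it assumes the common convention in which every branch of a split (and every roulette survivor) carries weight W/R, so that the expected total weight is conserved at each boundary crossing.

```python
import math
import random

def split_roulette(weight, imp_from, imp_to, rng):
    """Geometry splitting/roulette for a particle crossing from a cell of
    importance imp_from into a cell of importance imp_to.
    Returns a list of branch weights; an empty list means the particle
    was rouletted away. Expected total weight is conserved."""
    r = imp_to / imp_from                # Eq. (1): R = I2 / I1
    if r > 1.0:                          # split into floor(R) or ceil(R) branches
        lo = math.floor(r)
        n = lo + 1 if rng.random() < (r - lo) else lo
        return [weight / r] * n          # each branch carries W/R
    if r < 1.0:                          # roulette: survive with probability R
        if rng.random() < r:
            return [weight / r]          # survivor weight readjusted upward
        return []                        # particle killed
    return [weight]                      # equal importances: nothing happens

# On average the total weight is unchanged, e.g. for R = 2.5 (importance 2 -> 5):
rng = random.Random(1)
mean_w = sum(sum(split_roulette(1.0, 2, 5, rng)) for _ in range(50000)) / 50000
print(round(mean_w, 2))
```

Note that for a non-integer R such as 2.5, the branch count is chosen probabilistically between the floor and ceiling values exactly as described in the text, while the mean total weight remains that of the incoming particle.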


If R is greater than 1 a split occurs, and if R is less than 1 a roulette can occur. The summed constituent weights must be conserved during a split or roulette. While this is a relatively simple concept if R is an integer, an integer value is not required. If R is a non-integer greater than 1, the split is determined probabilistically. For example, if R is 2.5 then the particle is split into 3 branches with 50% probability and into 2 branches with 50% probability, where 2 and 3 are the floor and ceiling values respectively; since weight is conserved along each branch of a split, the weights are still conserved. Because weight is internally conserved among the split branches, this technique is simple to apply, and it offers the user a great deal of reliability in the resulting accuracy of the simulation. Another population control method related to geometry splitting and roulette is weight windowing. Within the F8 pulse height tally framework, it is not recommended to utilize the inbuilt weight window generator [13,16]. However, it is feasible to simulate the weighting value. This is not a trivial process, and generally requires some initial weight window operations, such as using a deterministic approach to determine the importance function as a function of space, time, and energy. Programs such as ADVANTG and other adjoint deterministic techniques can provide detailed deterministic importance functions that estimate the optimal weight window for a given transport problem [17,18]. It should be noted that the weight windowing method can be applied to all elements of phase space, whereas geometry splitting is limited to spatial dimensions.

Another category of variance reduction is modified sampling techniques. Within this category are the specific techniques of forced collisions, source biasing, implicit capture, and exponential transform. These techniques generally modify the way MCNP scores a particle. In the case of exponential transforms, the path length in a preferred direction is stretched between collisions by adjusting the total interaction cross section as seen in Eq. (2),

Σ_t* = Σ_t (1 − pμ)  (2)

where p is the stretching parameter and μ is the cosine of the angle between the direction of the particle and the stretching direction [13,19]. This has the benefit of amplifying particles within highly attenuating media, such as in deep shielding problems, which can be useful in determining the photon penetration in a collimator. This technique requires weight windowing to adjust for the stretching that takes place. Source biasing, in contrast, allows the user to change how the transport is sampled. For example, a source could be biased in a preferential direction or biased in terms of the probability of emission for a specific particle at a given energy. The location of the source itself can also be biased. Source biasing allows the user to take advantage of nonanalog processes to decrease the variance per particle while increasing the figure of merit, which implies a more efficient simulation. It is rather intuitive, for example, that a source directionally biased toward a tally region would have a higher sampling efficiency in comparison to an analog simulation of an isotropic source. Source biasing, however, usually requires a correction to account for the degrees of freedom removed from the random sampling process; in this case, direction is affected. In an MCNP input file, this manifests itself as a source biasing card that should correct for the differences in sampling, though analytical corrections for this effect are possible as well.

Forced collisions are another type of variance reduction method that alters the mean free path of the particle. Forced collisions force particles to undergo an interaction if they enter a given cell. The particle and its associated weight split into two subcomponents, collided and uncollided. This method is generally useful in creating source inputs for DXTRAN spheres, point detectors, or ring detectors. Implicit capture is a technique that allows a particle to continue transporting even after colliding in a material. When a particle interacts in a medium, all potential interactions are recorded with their relevant probabilities, as opposed to exclusively undergoing an absorption event. The particle will then travel as if it had not been absorbed, but with an adjusted weight, multiplied as in Eq. (3),

W_f = W_i (1 − σ_a/σ_t)  (3)

where W_f is the final adjusted weight, W_i is the initial weight, and σ_a/σ_t is the ratio of the absorption cross section to the total cross section [19,20]. This allows the weight to be properly adjusted to compensate for losses associated with absorption. This technique is useful in problems where strong material attenuation causes absorption events.

The last set of global variance reduction schemes are partially deterministic methods, including DXTRAN spheres. This is a weight-independent technique that allows the user to improve sampling within an unlikely region of phase space by applying a particle split in conjunction with a split transport. When an interaction occurs, a particle is split into two tracks: one continues transporting in the direction dictated by the scattering physics (the non-DXTRAN particle), while the other (the DXTRAN particle) travels directly toward a user-specified spherical surface surrounding the unlikely region of phase space, usually a tally region. The DXTRAN particle only interacts upon entering the sphere, thereby assuring that particles reach the desired region independent of the interaction. The DXTRAN particle is the uncollided fraction of the original particle, while the non-DXTRAN particle is the collided fraction. It should be noted that in this method weight is not conserved: since the collided particle continues with the same weight it was originally assigned, the total weight of the interaction is greater than the initial starting weight at the collision location. This is due to how the weight is assigned to the DXTRAN particle, as described in Eq. (4),

W_DXTRAN = W_i [P(μ)/P_arb(η)] e^(−λ)  (4)

where W_DXTRAN is the weight of the DXTRAN particle, W_i is the original weight, P(μ)/P_arb(η) is the ratio of a probability density function dependent on the cosine of the angle between the scattering direction and the incoming direction to an arbitrary probability density function, and e^(−λ) is the negative exponential of the optical mean free path. To make up for this weighting issue, the DXTRAN particle has zero weight for all tallies outside of the DXTRAN sphere. The non-DXTRAN particle has a standard weight for all tallies outside the DXTRAN sphere and will not impinge on a tally within the DXTRAN sphere if the particle reaches the sphere on its next flight. It is also advantageous to view DXTRAN spheres not only as mechanisms to increase sampling efficiency in regions where particles are unlikely to transport, but also to shield high-weight particles from impinging upon a tally. Nested DXTRAN spheres may be utilized to control weight fluctuations from collision events occurring near the sphere boundary; with the addition of more DXTRAN spheres, the user gains greater control over the particle weight and sampling in the area of interest [20,21].

The aforementioned variance reduction techniques can be utilized to lower tally variance without distorting the tally mean. In this manuscript, an analytical source-biased variance reduction technique is compared in MCNP to geometry splitting, DXTRAN spheres, and analog simulations of gamma ray transport through pencil beam collimators with varying radii. The authors intend to provide the reader with a notion of the various tradeoffs between accuracy and efficiency when using different variance reduction techniques for high aspect ratio pencil beam collimator problems. As a final note to the reader, the purpose of this work is to provide a foundation to view variance reduction techniques independently. Combining variance reduction techniques may provide smaller tally variance, yet the time required to implement them can be an unoptimized use of time if the problem could be more easily served by applying a single simple technique.
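As a concrete illustration of Eq. (3), the sketch below contrasts implicit capture with analog absorption at a single collision. This is a hypothetical Python example, not MCNP code: the analog particle is killed outright with probability σ_a/σ_t, while the implicitly captured particle always survives with its weight reduced by the same factor, so both treatments give the same expected surviving weight, but the implicit-capture estimate has no statistical spread for that step.

```python
import random

def implicit_capture(weight, sigma_a, sigma_t):
    """Eq. (3): the particle always survives the collision, carrying the
    expected non-absorbed fraction of its weight."""
    return weight * (1.0 - sigma_a / sigma_t)

def analog_collision(weight, sigma_a, sigma_t, rng):
    """Analog treatment: absorb the whole particle with probability sigma_a/sigma_t."""
    return 0.0 if rng.random() < sigma_a / sigma_t else weight

rng = random.Random(7)
sigma_a, sigma_t = 0.4, 1.0          # hypothetical cross sections (arbitrary units)
n = 100000
analog_mean = sum(analog_collision(1.0, sigma_a, sigma_t, rng) for _ in range(n)) / n
print(implicit_capture(1.0, sigma_a, sigma_t))   # deterministic: 0.6
print(analog_mean)                               # ~0.6, with statistical spread
```

The two means agree, which is why implicit capture leaves the tally mean undistorted while removing the absorption-or-survival coin flip from the variance budget.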


2. Methodology

2.1. Source definition

The variance reduction techniques were applied to a single basic geometry containing a source representing an irradiated fuel pin, a pencil beam tungsten collimator, and a detector, as seen in Fig. 1. The source term utilized in the MCNP variance reduction simulations was created using the Oak Ridge Isotope Generator (ORIGEN) within the SCALE package [22]. A gamma spectrum of spent nuclear fuel, dependent on the burnup, was generated and implemented. From the ORIGEN output, the user may acquire gamma emission intensities and, upon normalizing the spectrum, emission probabilities for each energy bin. The gamma spectrum generated from the fuel in ORIGEN was divided into 3000 energy bins from 0 to 20 MeV with a width of 6.67 keV, and the gamma source energies and probabilities of emission were implemented within the MCNP source definition. The F8 tally, however, utilized 1024 energy bins with an energy width of 2 keV. An in-depth description of how this source term was generated and implemented is found in Kilby et al.; the details of the source description, however, are not particularly critical to the following analysis [2]. The source term may be regarded as an unspecified mixed radionuclide multi-photon source. The results and conclusions largely transfer to single photon sources and single nuclide multi-photon sources.

The purpose of a pencil beam collimator geometry is to sample gamma radiation from a small solid angle; this allows the collection of high spatial resolution tomographs. The collimator dimensions were altered across multiple simulations to model the effect of aspect ratio variations. However, the width and height of the collimator were extended to 80 cm each so as not to allow large numbers of scattered photons to induce a signal at the detector. Since the length of the collimator is fixed by material attenuation constraints at 15.24 cm, the radius is modified to introduce variation in the aspect ratio. Because the aspect ratio is a function of the collimator length and the aperture diameter, decreasing the radius yields the greater effect on the aspect ratio. The radii simulated were 0.3, 0.5, 1, 3, 5 and 10 mm, while the scintillator detector dimensions were held constant at 7.6×7.6 cm. Geometry splitting, DXTRAN spheres, and an analog simulation without roulette were compared to the analytically-corrected, source-biased technique developed by Kilby et al. [2]. Roulette was disabled because of the adverse effect the technique has on the accuracy of pulse height tallies [23]. Simulation accuracy and computational savings were analyzed to determine which variance reduction schemes are preferable as the radius of the collimator changes. As a final note to the reader, the purpose of this work is to provide a foundation to view variance reduction techniques independently. Combining variance reduction techniques may provide smaller tally variance, yet in many cases it may prove to be a more optimized use of time to treat the problem by applying a single simple technique. It should also be noted that these techniques are compared for problems involving a pencil beam collimator and gamma radiation transport. While this is an important class of transport problems encountered in radiation imaging, the conclusions presented here do not necessarily extend to other radiation transport problems.

2.2. Geometry splitting/roulette

For geometry splitting and roulette, a geometry must be partitioned into smaller pieces for importance manipulation to have a beneficial effect on the transport problem. In the case of high aspect ratio pencil beam collimators, increased particle count is required in the collimator aperture due to extreme geometric attenuation; therefore, the collimator was split into 5 equal parts as a rough approximation to maintain a relatively constant particle population through the split regions, as seen in Fig. 1. Within each of the split cells, the importance ratio defined in Eq. (1) increases by a factor of 2 as the particles transport through the cells toward the detector and its associated F8 tally. As the radius of the aperture is increased, the importance ratios remain the same. It should be noted that a user may split the geometry into smaller portions and introduce additional importance ratios to increase or maintain greater control of particle count; however, the user will need to ensure that the population of particles remains relatively constant throughout the split region. Five evenly spaced cells, each with a width of 3.05 cm, were created with importance ratios of 2 to maintain a relatively constant number of particles throughout the transport problem. If the collimator aspect ratio increases, then the user may consider increasing the importance ratios.

2.3. Source biasing

The analytically-corrected, directionally source-biased technique developed in Kilby et al. is compared to the global variance reduction techniques within MCNP. Source biasing, as with all modified sampling techniques, will tend to overestimate the count rates as the source photons are forced into the collimator aperture, so corrections are required to compensate for this effect. The technique therefore relies on two parts: a sub-volume correction factor and a solid angle correction factor. Source points are only defined in the sub-volume of the imaging object within the field of view of the collimator. This region is defined as the intersection of the source volume with an extended wall of the collimator. It is important to note that this study approximates the fuel geometry as a conical frustum, as developed in Kilby et al. Due to nonzero importance values within the tungsten collimator, umbral and penumbral effects within the collimator are possible.

A solid angle correction factor based on a point source approximation is introduced; this correction is acceptable if and only if the aspect ratio of the collimator is high. For the monodirectional source-biased technique, all the source points are sampled from a volume defined as the intersection of the cylindrical fuel capsule and the cylindrical collimator, as seen in Fig. 2. The aforementioned volumetric correction factor is defined in Eq. (5),

F_v,cone = (r_3² x_3 − r_2² x_2) / (3R²H)  (5)

where r_3 is the radius of the conical frustum's larger base and r_2 is the radius of the smaller base. x_3 is the distance from r_3 to the center of the collimator aperture, and x_2 is the distance from r_2 to the center of the collimator aperture. R is the radius of the source (0.239 cm), and H is the height of the source (15.24 cm). Additionally, as the radius of the pencil beam collimator aperture decreases, the effective solid angle decreases as well. This implies that as the aspect ratio increases, the point-source approximation for the calculation of the solid angle correction factor becomes more accurate. The factor is shown in Eq. (6),

F_Ω = r_1² / (4L²)  (6)

where L is the length from the center of the source to the detector, and r_1 is the radius of the collimator aperture, as seen in Fig. 2.

To complete the correction for the departure from isotropy, the correction factors must be multiplied together with the total activity of the source, which is acquired through ORIGEN. This can be seen in Eq. (7),

Corrected Activity ≅ A_Source (F_v × F_Ω)  (7)

where A_Source is the activity of the source, F_v is the correction factor derived from the sub-volume defined in Eq. (5), and F_Ω is the solid angle correction factor defined in Eq. (6). The corrected activity is also compared using a volume correction factor (F_v) obtained through the finite element analysis software CUBIT [24], because Eq. (5) is only applicable when r_1 is less than R (0.239 cm). CUBIT only calculates the numerator in Eq. (5): it finds the volume of the conical field of view of the collimator intersected with the fuel capsule. The denominator of Eq. (5) is a fixed constant.


Fig. 1. Geometry splitting of the tungsten collimator (not drawn to scale).

Fig. 2. Analytical source biased geometry (not drawn to scale).

2.4. DXTRAN sphere

The DXTRAN sphere method begins with a sphere around a cell or group of cells in MCNP. This sphere, while not defined as a physical cell, alters the way particles interact in the entire MCNP simulation. The DXTRAN sphere causes all particles that interact in any medium outside the sphere to split into two components, DXTRAN and non-DXTRAN. As discussed previously, the DXTRAN particles will move toward the user-defined sphere, while the non-DXTRAN particle will continue its Markovian walk until it reaches the DXTRAN sphere boundary or is absorbed. The DXTRAN particle will always transport toward the sphere volume, wherein it has the weight defined by Eq. (5), but zero weight everywhere outside the DXTRAN sphere region. This principle ensures that the scattered particle will reach the desired tally region regardless of strong attenuation or low sampling efficiencies due to small solid angles. For high aspect ratio pencil beam collimators, DXTRAN spheres can be used to account for particles that miss the detector tally due to small angle scattering. DXTRAN spheres can further increase the likelihood of a particle impinging on a tally by using an inner radius. If an inner radius is specified, the likelihood of a particle being sampled is increased by a factor of 5. For this simulation, a DXTRAN sphere was added to the detector tally in the MCNP simulation as seen in Fig. 3. The DXTRAN sphere is defined with an inner radius of 8.52 cm and an outer radius of 11.76 cm. It is important to note that scattering toward the inner radius is five times likelier than toward the outer. The collimator aperture is increased to determine computational savings and accuracy with changes to the aspect ratio.

2.5. Computational savings

To evaluate the computational savings of each variance reduction technique, one utilizes a figure of merit. In MCNP the figure of merit is defined in Eq. (8),

FOM = 1/(R²T)    (8)

where R² is the squared relative error of the tally, proportional to the inverse of the number of particle histories, and T is the simulation time, itself proportional to the number of particle histories [25]. A large figure of merit over the course of the simulation is desirable. While this is an indicator of the efficacy of a simulation, the authors wish to utilize a figure of merit that considers the particle history counts and the respective variances. This allows for a broader time-independent comparison that can be extended to operation with different computational resources, such as high-powered computing clusters. Ratios of this figure of merit are used to evaluate the computational savings between variance reduction techniques.

In a tally bound by Poisson statistics, the mean (μₘ) is the number of counts (cₘ) within the tally bin; it is proportional to the number of particles utilized in the simulation. In a Poisson process, the mean is equal to the variance. However, to incorporate the volumetric correction factors of the analytical source biased technique or the other variance reduction methods, a factor fₘ is needed, where m is the method or specific geometry used. This is shown in Eq. (9).

μₘ = fₘ·NPSₘ = cₘ    (9)
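The figure of merit of Eq. (8) can be reproduced directly from a tally's relative error and the run time reported by the code. A small illustrative sketch (the numbers are invented, not taken from these simulations):

```python
def figure_of_merit(rel_error, time_minutes):
    """Eq. (8): FOM = 1 / (R^2 * T). Approximately constant for a
    well-behaved tally, since R^2 scales as 1/N and T scales as N."""
    return 1.0 / (rel_error**2 * time_minutes)

# Doubling the histories halves R^2 and doubles T, leaving FOM unchanged.
fom_short = figure_of_merit(0.10, 5.0)            # hypothetical short run
fom_long = figure_of_merit(0.10 / 2**0.5, 10.0)   # twice the histories
assert abs(fom_short - fom_long) < 1e-9
```

This time-dependence is why the authors prefer the NPS- and variance-based comparison of Section 2.5 for cross-machine comparisons.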


Fig. 3. Geometry with a DXTRAN sphere around the detector cell tally region.

By substituting the variance (σₘ²) in for the mean (μₘ), Eq. (10) results.

fₘ = σₘ²/NPSₘ    (10)

To incorporate the fractional error associated with the MCNP tally bins, the fractional error (ϵₘ) can be expressed as a function of the counts (cₘ) and the standard deviation (σₘ), as shown in Eqs. (11)–(12).

ϵₘ = σₘ/cₘ = 1/√cₘ = 1/√(fₘ·NPSₘ*)    (11)

NPSₘ* = 1/(fₘ·ϵₘ²)    (12)

For a given fractional tally bin error (ϵₘ) that is the same for both cases, NPS₁* is the minimum number of particles needed to achieve the fractional error of the tally for method 1 (in this case the monodirectional case). NPS₂* is the minimum number of histories for method 2 (in this case the isotropic case). The computational savings (S) is then defined in Eq. (13),

S = NPS₂*/NPS₁* = (NPSₘ,₂·σₘ,₁²)/(NPSₘ,₁·σₘ,₂²)    (13)

where NPS is user-defined in each MCNP input file, and σₘ² is acquired from the MCNP output file for a given NPS. In this relationship, if S > 1, then the simulation results in net computational savings. If S < 1, then the simulation results in increased computational expense.

3. Results

3.1. Variance reduction comparisons

When examining the efficacy of a variance reduction technique, it is necessary to benchmark the technique against an analog simulation. In this case, as accurate photopeak data are desired within a reasonable simulation time, each technique is compared at seven strong photopeak energies: 0.500, 0.608, 0.669, 0.761, 0.768, 0.800, and 1.60 MeV. These photopeaks were determined to be of importance from the ORIGEN simulations that formulated the emission data for the transport calculations. In order to calculate the photopeak accuracy of a variance reduction technique, the bin counts in the variance-reduced F8 tallies were divided by the bin values of the analog F8 tally. These ratios can be seen in Table 1, and error bars were calculated using error propagation by division. The error bars can be seen in Table 2.

The DXTRAN values are accurate within the error bars until the 1 mm case, where some variance is noted. The 0.802 MeV peak and the 1.6 MeV peak differ from the analog case the most, but these differences could result from random error or from better sampling of the photons in those bins. Given the uncertainties, the results are still within agreement.

Perhaps as anticipated, geometry splitting emerges as the most accurate of the techniques for all collimator sizes. Its accuracy is resistant to changes in collimator aperture radius, meaning that some of the weaker photopeaks are not adversely impacted by using this technique. It is important to note that a comparison of the Compton region was not considered in the quantitative analysis, as the ability of the analog simulation to fully resolve the region is hindered by high geometric attenuation, which is exacerbated by decreasing collimator apertures. A significant increase in particle histories would be necessary to achieve a result with lower relative errors. The increase in the ratios at 0.802 and 1.60 MeV can be explained by statistical variations, and the photopeak count ratios with respect to the analog case do fall within the region of agreement, i.e. values close to 1.

The conical frustum approximation overestimates count rates across all aperture sizes tested and is thus considered the least accurate method. Note that for this comparison, two additional radii were examined to better define the region of applicability for this technique. Additionally, for the 0.3 and 0.5 mm cases an isotropic run was infeasible due to the number of particle histories required for statistical convergence (greater than 1E12 particle histories), so a DXTRAN sphere was implemented to acquire data for these two data sets. However, to analyze only the photopeaks that were well converged, that is, with relative bin errors less than 10%, the 1.6 MeV photopeak was discarded, as its bin errors were, in some instances, greater than 50%. The conical frustum approximation is examined in two ways. One approach is to apply the analytical volumetric correction factor that is outlined in this manuscript over all collimator radii. The other approach is to use finite element analysis programs to obtain a more accurate representation of the conical volumetric correction factor defined in Eq. (5). In both cases, the point source approximation of the solid angle is utilized. A weighted average is applied to the photopeaks to obtain one weighted value for each of the collimator aperture radii. These weighted averages are shown in Fig. 4.

As is to be expected, utilizing a program such as CUBIT to provide an accurate estimation of the volume improves the accuracy of the methodology when the radius of the collimator aperture exceeds the radius of the fuel capsule. However, when R is greater than r₁, the analytic approximation for the volume correction factor is applicable. To highlight this effect, at a 10 mm aperture radius the analytical volume factor provides a ratio that overestimates by a factor of 7.38, while the empirical volume correction factor case yields a ratio of 1.08. However, at a 1 mm aperture size the weighted average analytical ratio is 1.19 and the empirical ratio is 1.16. Therefore, the region of applicability for Eq. (5) is when the radius of the fuel capsule is greater than the radius of the collimator aperture. It should be reiterated that the only difference between the empirical and analytical cases is the estimation of the volume correction factor (Fᵥ). The overall methodology is the same for both instances, that is, utilizing Eq. (7) to calculate the corrected activity, and it displays a degree of accuracy over the collimator aperture radii modeled.
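Eqs. (10)–(13) reduce to a ratio that can be evaluated from the user-set NPS values and the tally variances read from the output files. A minimal sketch with invented numbers:

```python
def computational_savings(nps_1, var_1, nps_2, var_2):
    """Eq. (13): S = NPS2*/NPS1* = (NPS_2 * var_1) / (NPS_1 * var_2).
    Method 1 is the candidate technique, method 2 the reference
    (isotropic analog) run; S > 1 means the candidate reaches a given
    fractional bin error with less computational effort."""
    return (nps_2 * var_1) / (nps_1 * var_2)

# Invented values: candidate run of 1e7 histories with bin variance 4e-6
# versus a reference run of 1e9 histories with bin variance 1e-6.
s = computational_savings(1e7, 4e-6, 1e9, 1e-6)
assert abs(s - 400.0) < 1e-6   # S > 1: net computational savings
```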


Table 1
Variance-reduced-to-analog photopeak ratios for geometry splitting and DXTRAN as a function of collimator aperture radius.
Energy (MeV) Geometry splitting DXTRAN
1 mm 3 mm 5 mm 10 mm 1 mm 3 mm 5 mm 10 mm
0.500 1.04 0.99 0.99 1.00 0.98 0.99 0.98 0.99
0.608 1.09 0.98 0.99 0.99 0.98 0.98 0.99 1.00
0.669 0.97 1.00 0.99 1.00 0.94 0.97 0.98 1.01
0.761 0.97 1.01 1.00 1.00 0.95 1.00 1.00 1.00
0.768 0.99 1.00 1.00 1.00 0.98 0.99 0.99 1.00
0.802 0.89 0.95 0.98 1.01 0.89 0.94 0.98 0.99
1.600 1.16 1.00 0.99 1.00 1.40 0.97 0.99 1.00

Table 2
Error bars for geometry splitting and DXTRAN variance reduction techniques as a function of collimator aperture radius.
Energy (MeV) Geometry splitting DXTRAN
1 mm 3 mm 5 mm 10 mm 1 mm 3 mm 5 mm 10 mm
0.500 6×10⁻² 1.9×10⁻² 9×10⁻³ 4×10⁻³ 4×10⁻² 1.7×10⁻² 8×10⁻³ 3×10⁻³
0.608 1.3×10⁻¹ 4×10⁻² 2×10⁻² 9×10⁻³ 9×10⁻² 4×10⁻² 1.9×10⁻² 7×10⁻³
0.669 1×10⁻² 4×10⁻² 1.7×10⁻² 7×10⁻³ 7×10⁻² 3×10⁻² 1.6×10⁻² 6×10⁻³
0.761 4×10⁻² 1.7×10⁻² 8×10⁻³ 3×10⁻³ 3×10⁻² 1.6×10⁻² 7×10⁻³ 3×10⁻³
0.768 3×10⁻² 1.0×10⁻² 5×10⁻³ 2×10⁻³ 2×10⁻² 9×10⁻³ 4×10⁻³ 1.6×10⁻³
0.802 1.1×10⁻¹ 4×10⁻² 2×10⁻² 9.8×10⁻³ 8×10⁻² 4×10⁻² 2×10⁻² 8×10⁻³
1.600 1.1×10⁻¹ 4×10⁻² 1.9×10⁻² 8×10⁻³ 3×10⁻¹ 4×10⁻² 1.8×10⁻² 7×10⁻³
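The error bars in Tables 2 (and A.2) follow from standard error propagation for the division of two tally bins, δR = R·√((δa/a)² + (δb/b)²). A short sketch (values illustrative):

```python
import math

def ratio_error(ratio, frac_err_num, frac_err_den):
    """Absolute error of a ratio R = a/b given the fractional errors
    of the numerator and denominator (error propagation by division)."""
    return ratio * math.sqrt(frac_err_num**2 + frac_err_den**2)

# Illustrative: a ratio of 0.99 built from bins with 3% and 4% errors.
err = ratio_error(0.99, 0.03, 0.04)
assert abs(err - 0.99 * 0.05) < 1e-9   # 3-4-5: combined 5% fractional error
```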

Fig. 4. Weighted averages and associated error bars of photopeaks at varying collimator aperture radii (0.3–10 mm) for the corrected activity using both the purely analytical approximation and the empirical calculation for the volumetric correction factor. The dotted line represents a ratio of 1.

Fig. 5. Computational savings for the seven strongest photopeaks as a function of collimator aperture radius. Each of the seven strongest photopeaks is shown in each cluster.
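The weighted averages plotted in Fig. 4 combine the per-photopeak ratios into one value per aperture radius. The manuscript does not state its weighting scheme, so the sketch below assumes the common inverse-variance weighting:

```python
def weighted_average(values, errors):
    """Inverse-variance weighted mean of photopeak ratios and its
    1-sigma error (this weighting scheme is an assumption here)."""
    weights = [1.0 / e**2 for e in errors]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, total ** -0.5

# Invented ratios and error bars for three photopeaks at one radius.
mean, err = weighted_average([1.08, 1.10, 1.02], [0.02, 0.04, 0.08])
assert 1.02 < mean < 1.10   # mean lies within the input ratios
assert err < 0.02           # combined error beats the best single peak
```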

3.2. Computational savings

The computational savings from all the applied variance reduction techniques at each of the strongest intensity photopeaks are shown as a function of collimator radius in Fig. 5. From Fig. 5, the monodirectional bias case yields the greatest computational savings, between 10⁸ and 10¹³ depending on the aperture size. In contrast, the DXTRAN and geometry splitting techniques vary in terms of computational savings and cost between 1 and 10 mm. Both methods have noticeable savings compared to the isotropic case for the 1 mm and 10 mm collimator apertures. The DXTRAN sphere method has savings of approximately 2.3 and 1.0 at 1 mm and 10 mm respectively, and the geometry splitting method has savings of 2.9 and 1.4 at 1 mm and 10 mm respectively. This means that, at those collimator apertures, the variance reduction techniques do have a positive impact on the ability of the computer to arrive at a given fractional tally bin error per given particle. However, at 3 mm and 5 mm, both the DXTRAN and geometry splitting techniques are more computationally expensive than the isotropic analog case. This means that for a given photopeak at a given collimator aperture radius, these methods have no positive impact on the ability of the computer to arrive at a given fractional tally bin error per given particle. Note that the 0.3 and 0.5 mm cases are not displayed, because a standard isotropic run to compare against would require trillions of particle histories to obtain statistically acceptable results.

Upon closer inspection, there is no discernible benefit to the DXTRAN and geometry splitting methods solely by measure of the photopeak regions for the 3 mm and 5 mm collimator apertures. For the geometry splitting method, the average savings are approximately 0.2 and 0.07 for the 3 mm and 5 mm collimator aperture radii respectively, implying that there are no computational advantages to the geometry splitting method when examining only photopeaks in this type of problem. The DXTRAN method, meanwhile, yields average savings of approximately 0.24 and 0.13 for the 3 mm and 5 mm radii respectively. For the purpose of measuring photopeak counts, both techniques offer little reduction to computational cost at these collimator aperture radii.

While it is difficult to fully resolve the Compton region in the analog case, it is possible to compare savings across the entire spectrum as a


function of radius to gauge each technique's effectiveness across the entire tally. These savings are visualized in Fig. 6.

When considering total tally bins, the variance reduction techniques do provide an appreciable increase in computational savings. For the DXTRAN method, a savings factor of 23, 0.31, 0.76, and 3.44 is achieved for 1, 3, 5, and 10 mm respectively, which is a significant improvement over the geometry splitting method, which has savings of 4.2, 0.09, 0.27, and 2.1 at 1, 3, 5, and 10 mm respectively. The DXTRAN method generally outperforms the geometry splitting method throughout each of the collimator aperture radii simulated. Comparing Figs. 5 and 6, one can conclude that tally bins associated with photopeaks converge more slowly using geometry splitting, with a slightly higher degree of accuracy than the other methods, while the background and Compton continuum converge more rapidly than in the isotropic analog case. Thus, when modeling spectral background, the DXTRAN method has advantages over analog simulation and geometry splitting. The source-biased technique, as in the photopeak regions, significantly outperforms the other variance reduction schemes, with the caveat that its accuracy is only tolerable for high aspect ratio collimators.

Fig. 6. Computational savings for the total tally bin at varying collimator aperture radii.

4. Conclusions

The speed and accuracy of three variance reduction techniques for accelerating radiation transport problems involving the transport of gamma rays through narrow pencil beam collimators were compared. Geometry splitting emerged as the most accurate by providing photopeak ratios nearly identical to those predicted from fully analog simulations. The DXTRAN sphere provided accurate photopeak tallies except at the 1 mm collimator aperture radius. This is most likely attributed to the fact that the weaker 1.6 MeV peak is better sampled, compared with the isotropic simulation, thus increasing the counts. In contrast, the corrected monodirectional source biasing technique exhibited the least accuracy. A viable method to fix the accuracy of this method, when the radius of the fuel capsule is less than the radius of the collimator aperture, is to utilize a better estimate of the true volume of the intersection between the cone created by the collimator field of view and the cylindrical fuel capsule. However, this is probably easier to calculate using CUBIT or SOLIDWORKS, because the intersected volume is not a trivial calculation. By utilizing one of these programs, the accuracy of the technique is increased over the analytical approximation of the conical volume correction factor from 3 to 10 mm. The ratio using these empirical conical volume factors ranges from 1.06 to 1.08. From 0.3 to 1 mm, the volumetric correction that is calculated using Eq. (5) is nearly identical to that calculated by CUBIT. For example, the ratios for the analytic volume correction factor were 1.09 to 1.19, and the ratios calculated using the empirical volume correction factor were 1.16 to 1.19. In short, to utilize the methodology outlined in this manuscript, all that is required is an accurate estimation of the volume correction factor, either through purely analytical means or through empirical acquisition, and a point approximation of the solid angle. These two components, when multiplied together, yield a weight correction scheme that is accurate over a wide range of collimator aperture radii.

With regard to computational savings, the corrected monodirectional source-biased technique provided the greatest computational savings over all of the collimator aperture radii. Compared to the fully analog simulations, DXTRAN spheres and geometry splitting were actually more computationally expensive for the 3 mm and 5 mm collimator aperture radii, indicating that computational resources are better spent on an analog simulation than on these techniques when examining only the photopeak regions in the F8 tally. Considering the entire tally, the DXTRAN sphere sees a pronounced increase over the analog case in computational savings. Compared with the geometry splitting technique, the DXTRAN method performs better over the entire F8 tally spectrum at each of the collimator aperture radii simulated.

When choosing a variance reduction technique to utilize for high aspect ratio pencil beam collimator problems, it is important to understand which component of the spectrum needs to be analyzed. The geometry splitting method provides the least variation in accuracy and seems better poised to outperform the DXTRAN method for predicting photopeak intensities. The DXTRAN method should be used when accuracy and greater computational savings are desired over the entirety of the F8 tally spectrum (including the Compton background). Ultimately, the monodirectional source biasing technique is vastly more efficient than DXTRAN and geometry splitting but will only provide quantitative results when the aspect ratio of the collimator is large.

CRediT authorship contribution statement

Seth Kilby: Conceptualization, Methodology, Software, Writing - original draft. Jack Fletcher: Software, Visualization. Zhongmin Jin: Software, Writing - review and editing. Ashish Avachat: Writing - review and editing. Devin Imholte: Writing - review and editing, Project administration. Nicolas Woolstenhulme: Writing - review and editing, Project administration. Hyoung K. Lee: Writing - review and editing, Project administration. Joseph Graham: Writing - review and editing, Conceptualization, Supervision, Project administration.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This material is based upon work supported by the U.S. Department of Energy, Nuclear Energy University Programs, project 17-13011, and by the U.S. Nuclear Regulatory Commission, Nuclear Education Program under award NRC-HQ-13-G-38-0026. The authors would also like to extend gratitude to Dr. Kyle Paaren for his software license and assistance procuring volumetric intersection data from CUBIT.

Appendix

See Tables A.1 and A.2.


Table A.1
Variance reduction cases to analog photopeak count ratios.
Energy Analytical cone approximation Empirical cone approximation
(MeV)
0.3 mm 0.5 mm 1 mm 3 mm 5 mm 10 mm 0.3 mm 0.5 mm 1 mm 3 mm 5 mm 10 mm
0.500 1.270 1.327 1.316 2.169 3.532 7.122 1.263 1.297 1.243 1.083 1.080 1.100
0.608 1.085 1.212 1.258 2.056 3.349 6.654 1.080 1.194 1.186 1.027 1.024 1.028
0.669 1.124 1.000 1.153 2.100 3.339 6.762 1.116 0.939 1.089 1.049 1.021 1.045
0.761 0.959 1.245 1.204 2.142 3.478 6.993 0.954 1.061 1.136 1.070 1.063 1.080
0.768 1.165 1.246 1.237 2.140 3.459 7.031 1.159 1.220 1.167 1.069 1.058 1.086
0.802 0.978 1.146 0.996 2.015 3.350 6.901 0.973 1.146 0.942 1.006 1.025 1.066

Table A.2
Photopeak count ratio error (𝛿R).
Energy (MeV) Analytical cone approximation Empirical cone approximation
0.3 mm 0.5 mm 1 mm 3 mm 5 mm 10 mm 0.3 mm 0.5 mm 1 mm 3 mm 5 mm 10 mm
0.500 5×10⁻² 3×10⁻² 5×10⁻² 4×10⁻² 3×10⁻² 2×10⁻² 5×10⁻² 2×10⁻² 4×10⁻² 2×10⁻² 8×10⁻³ 3×10⁻³
0.608 1×10⁻¹ 6×10⁻² 1×10⁻¹ 8×10⁻² 6×10⁻² 4×10⁻² 1×10⁻¹ 6×10⁻² 1×10⁻¹ 4×10⁻² 2×10⁻² 6×10⁻³
0.669 1×10⁻¹ 2×10⁻¹ 8×10⁻² 7×10⁻² 5×10⁻² 4×10⁻² 1×10⁻¹ 2×10⁻¹ 7×10⁻² 4×10⁻² 2×10⁻² 6×10⁻³
0.761 2×10⁻¹ 2×10⁻² 4×10⁻² 3×10⁻² 2×10⁻² 2×10⁻² 2×10⁻¹ 2×10⁻² 3×10⁻² 2×10⁻² 7×10⁻³ 3×10⁻³
0.768 8×10⁻² 1×10⁻² 2×10⁻² 2×10⁻² 1×10⁻² 1×10⁻² 8×10⁻² 1×10⁻² 2×10⁻² 1×10⁻² 5×10⁻³ 2×10⁻³
0.802 1×10⁻¹ 6×10⁻² 8×10⁻² 9×10⁻² 7×10⁻² 5×10⁻² 1×10⁻¹ 6×10⁻² 7×10⁻² 4×10⁻² 2×10⁻² 8×10⁻³

References

[1] I. Kawrakow, M. Fippel, Investigation of variance reduction techniques for Monte Carlo photon dose calculation using XVMC, Phys. Med. Biol. 45 (2000) 2163–2183.
[2] S. Kilby, et al., A source biasing and variance reduction technique for Monte Carlo radiation transport modeling of emission tomography problems, J. Radioanal. Nucl. Chem. (2019).
[3] H.M.O.D. Parker, M.J. Joyce, The use of ionising radiation to image nuclear fuel: A review, Prog. Nucl. Energy 85 (2015) 297–318.
[4] A. Anastasiadis, A Monte Carlo Simulation Study of Collimators for a High-Spatial-Resolution Gamma Emission Tomography Instrument, Uppsala Universitet, 2019.
[5] T. Lundqvist, S. Jacobsson Svärd, A. Håkansson, SPECT imaging as a tool to prevent proliferation of nuclear weapons, Nucl. Instrum. Methods Phys. Res. Sect. A 580 (2) (2007) 843–847.
[6] P. Andersson, S. Holcombe, A computerized method (UPPREC) for quantitative analysis of irradiated nuclear fuel assemblies with gamma emission tomography at the Halden reactor, Ann. Nucl. Energy 110 (2017) 88–97.
[7] S. Holcombe, S. Jacobsson Svärd, L. Hallstadius, A novel gamma emission tomography instrument for enhanced fuel characterization capabilities within the OECD Halden Reactor Project, Ann. Nucl. Energy 85 (2015) 837–845.
[8] S.J. Svärd, A. Håkansson, A. Bäcklin, P. Jansson, O. Osifo, C. Willman, Tomography for partial-defect verification: experiences from measurements using different devices, 2006.
[9] S. Caruso, M.F. Murphy, F. Jatuff, R. Chawla, Determination of within-rod caesium and europium isotopic distributions in high burnup fuel rods through computerised gamma-ray emission tomography, Nucl. Eng. Des. 239 (7) (2009) 1220–1228.
[10] S. Caruso, M.F. Murphy, F. Jatuff, R. Chawla, Nondestructive determination of fresh and spent nuclear fuel rod density distributions through computerised gamma-ray transmission tomography, J. Nucl. Sci. Technol. 45 (8) (2008) 828–835.
[11] P. Andersson, S. Holcombe, T. Tverberg, Inspection of a LOCA test rod at the Halden reactor project using gamma emission tomography, in: Top Fuel 2016: LWR Fuels with Enhanced Safety and Performance, 2016.
[12] S. Jacobsson, A. Bäcklin, A. Håkansson, P. Jansson, A tomographic method for experimental verification of the integrity of spent nuclear fuel, Appl. Radiat. Isot. 53 (4–5) (2000) 681–689.
[13] X-5 Monte Carlo Team, MCNP: A General Monte Carlo N-Particle Transport Code, Version 5, Volume I: Overview and Theory, 2003.
[14] J.T. Goorley, et al., MCNP6 User's Manual, Los Alamos National Laboratory, 2013.
[15] J.E. Morel, Deterministic Transport Methods and Codes at Los Alamos, Los Alamos National Laboratory.
[16] T.E. Booth, Common Misconceptions in Monte Carlo Particle Transport, 2011.
[17] S.W. Mosher, et al., ADVANTG: An Automated Variance Reduction Parameter Generator, 2015.
[18] A. Davis, A. Turner, Comparison of global variance reduction techniques for Monte Carlo radiation transport simulations of ITER, Fusion Eng. Des. 86 (9–11) (2011) 2698–2700.
[19] K.J. Kelly, et al., Implicit-capture simulations for quantification of systematic uncertainties from experimental environments, Nucl. Instrum. Methods Phys. Res. A (2018).
[20] T.E. Booth, Pulse Height Tally Variance Reduction in MCNP, 2004.
[21] T.E. Booth, K.C. Kelley, S.S. McCready, Monte Carlo variance reduction using nested DXTRAN spheres, Nucl. Technol. 168 (3) (2009) 765–767.
[22] B.T. Rearden, M.A. Jessee, SCALE Code System, 2016.
[23] J.S. Bull, Validation Testing of Pulse Height Variance Reduction in MCNP, vol. 836, 2008.
[24] T. Blacker, et al., CUBIT Geometry and Mesh Generation Toolkit 15.2 User Documentation, 2016.
[25] J.K. Shultis, R.E. Faw, An MCNP Primer, 2004.
