

Computer Physics Communications 87 (1995) 54-86

User-configurable MAGIC for electromagnetic PIC calculations


Bruce Goplen, Larry Ludeking, David Smithe, Gary Warren
Mission Research Corporation, 8560 Cinderbed Road, Newington, VA 22122, USA

Received 20 April 1994, revised 12 December 1994

Abstract
MAGIC is a user-configurable code that solves Maxwell's equations together with Lorentz particle motion. A
variety of 2D, finite-difference electromagnetic algorithms and 3D particle-in-cell algorithms may be combined in
problem-specific ways to provide fast, accurate, steady-state and transient calculations for many research and design
needs. Default configurations provide good speed and accuracy for most applications, and a library of templates
offers optimized algorithm configurations for specific devices. A programmable processor named POSTER provides
advanced post-analysis of the field and particle solutions. Coordinate systems, boundary conditions, geometry, and
materials are specified by the user, and grid generation can be manual, user-assisted, or fully automatic. MAGIC has
a fully 3D counterpart called SOS. Programs exist to connect these analysis tools to parametric and CAD input from
an integrated design environment.

1. Introduction
Direct integration of Maxwell's equations is a
general approach for determining the dynamic
behavior of electromagnetic systems. When there
are electrically charged particles present, the relativistic equations of motion based on the Lorentz
force can also be integrated directly to include
the effects of charge and current density. In principle, direct integration of the Maxwell and
Lorentz equations should solve any classical electromagnetics problem. In practice, its use has
been limited by a number of problems, including
the complexity, incompatibility, or unavailability
of essential algorithms, the requirements for detailed geometry, material, boundary, and particle
emission models, the interpretation of voluminous and abstract output, a failure to integrate
with other software in a design environment, and a shortage of personnel with training in this complex, abstract science.


Such problems have presented a continuing
challenge to the successful use of electromagnetic
PIC, and over the years, many techniques have
been developed to address them. MAGIC is successful because it incorporates many of the most
useful techniques, and allows them to be configured to meet a user's specific needs with a minimum of knowledge and effort. Consequently, it
has been used for research and design in many
fields, including microwave amplifiers, antennas,
sensors, fiber optics, lasers, accelerator components, beam propagation, pulsed power, plasma
switches, microwave plasma heating, ion sources,
field emitter arrays, semiconductor devices, wave
scattering, and coupling analyses. The configurability features which support such a spectrum of
uses are the subject of this article.


1.1. Overview

MAGIC may be viewed as a tool-box that can be configured at run time by selecting appropriate geometry, material properties, boundary conditions, field algorithms, particle algorithms, and
output specifications. The degree of detail specified is largely up to the user; default configurations provide good accuracy in reasonable time
for the novice who does not wish to or may not
know how to specify some aspects of an electromagnetic PIC simulation.
The basic electromagnetic computational processing cycle is shown in Fig. 1. Configurability is,
in part, the ability to select algorithms independently for each step and, in part, the ability to
add steps to the process. The fields in Maxwell's
equations are represented on a finite-difference
grid [1]. From an alternative, engineering viewpoint, the grid creates a circuit element at each
location in space, which is coupled to elements at
neighboring locations. MAGIC can use a variety
of field algorithms with a range of speed, stability, and numerical smoothness; when advantageous, it employs the engineering viewpoint to
provide special models that treat material properties and complex, fine-structure geometric details.
Particles are represented using the particlein-cell (PIC) approach [2], in which a computational particle (hereafter referred to as a macro
particle) represents a large number of physical
particles of the same species (e.g., electrons). A
variety ofalgorithms for emission processes, particle kinematics, and current density allocation
are available, all based upon the same mathematical foundation, but each individually optimized
for different density, velocity, and field strength
regimes.

Fig. 1. The processing cycle per time step: electromagnetic fields, material effects on fields, particle kinematics, material effects on particles, and particle currents. Many algorithmic options are available for each step.


Material models provide accurate treatment of the complexities of real systems and can include
both field and particle effects. Some quantum
effects are modeled using phenomenological
models. Material models are designed to be compatible with all field and particle algorithm combinations.
Boundary conditions connect the domain of
the simulation with the spatial regions beyond.
These algorithms are based on the fundamental
response of Maxwell's equations under the specification of Neumann, Dirichlet, and other symmetries, as well as other, more complicated conditions.
A representative list of MAGIC's capabilities
is shown in Table 1. Section 2 presents the strategies for implementation of configurability, and
Section 3 provides the necessary mathematical
foundations. Many of the configurable elements
listed in Table 1 are then discussed in Sections 4
through 8.
1.2. History

The history of the MAGIC family of codes begins in 1978. MAGIC was created to model
magnetic insulation, from which it derives its
name [3]. A three-dimensional version, called
SOS, was created to treat system-generated, electromagnetic pulse effects on satellites [4] and was
subsequently applied in pulsed power applications [5]. The name, SOS, stands for self-optimized sector, referring to the automated memory
management system used. Both codes were written in FORTRAN for use as research tools by
computational physicists on super-computers.
Their subsequent development has been greatly
influenced by increasing diversity in applications
and users.
New algorithms have always been added in a
general way to maximize usage and to maintain
the compatibility of MAGIC and SOS. Algorithms are generally developed first in MAGIC
and then, once proven, are incorporated into
SOS. Their architecture has been updated to
improve modular structure and to incorporate a
reusable software library. The graphics have been
periodically updated and now allow link-time se-


Table 1
Configurable elements of MAGIC
Control language
Variable definitions
Function definitions
Conditionals
Loops
Macros
Self-documentation
Restart
Error handling

Grid
Uniform grid
Manual grid
Appended regions
Polynomial smoothly varying grid
Pade smoothly varying grid
Materials
Resistive
Dielectric
Perfect conductors
Scattering foil
Polarizer sheet
Helix element
General current sources
Air chemistry
Semiconductor

Geometry
Cartesian coordinates
Polar coordinates
Cylindrical coordinates
Spherical coordinates
Mirror symmetry boundary
Periodic symmetry boundary
Absorbing boundary
Outgoing wave boundary
Phase-specific outgoing wave boundary
Ideal transmission line sections
Non-ideal transmission line sections
Applied voltage boundary
External circuit voltage source
Particle and field import

Field algorithms
Standard leapfrog
Time-reversible leapfrog
Semi-implicit
Standard noise filtering
High-Q noise filtering
Quasistatic
Electrostatic ADI
Electrostatic SOR
Externally specified magnetic field
Restricted TE or TM modes

Particle algorithms
2D or 3D forces
Area restricted forces
Spatial averaged forces
Temporal averaged forces
Restricted TE or TM forces
Fixed analytical forces
Applied magnetic forces
Charge-conserving currents
Fast non-conserving currents
Space-charge error-diffusion currents
Thermal cooling currents
Gyro-orbit currents
Preionization
Multiple species
Surface plasma emission
Field emission creation
Photo emission creation
Thermionic emission creation
Explicit beam creation
Independent particle push interval
Nonrelativistic kinematics
Relativistic kinematics
Predictor-corrector kinematics
Gyro-orbit kinematics
Output
Electromagnetic fields
Particle position and momentum
Energy balance
Particle fluxes
Particle energy
Poynting flux
Particle current distribution
Particle energy distribution
Snapshot timers
Time-averaged snapshots
Time history
Smoothing
Derivative operations
Integration operations
x-y plots
Contour plots
Perspective plots
Particle trajectories
Tabular data
Boundary export
Vector representations
General transformation options
Fourier spatial decomposition
Fourier temporal decomposition
Particle tagging
Screen drivers
Metafile drivers
Post-processing data format


lection of many different platform-dependent screen drivers and run-time selection of screen or PostScript output. The codes have been ported to VMS, UNIX, DOS, and MS-Windows environments, the latter being illustrated in Fig. 2.
A user's group was formed in 1989 to facilitate
code dissemination and usage [6]. The group includes university, government laboratory, and
commercial users who are applying the tools in
many different fields. Extensive documentation,
including user's manuals and reference manuals
[7-11], is maintained for the group, which meets
annually to exchange techniques and to identify
key needs for the upcoming year. Users also
communicate via a newsletter [12] dedicated to
that function.


In 1990, the POSTER code was released, having evolved from an earlier, special-purpose post-processor. POSTER is now a programmable engineering spreadsheet of general applicability, which retains special analysis features appropriate for electromagnetic PIC. It also facilitates interconnecting electromagnetic PIC tools with each other and with other modeling tools. To this end, a data storage format was created, and code was added to POSTER, MAGIC, and SOS to input and output data in the new format. New post-analysis features continue to be added to POSTER.
Parallel processing has also been investigated.
In 1991, a parallel version of MAGIC was released by Leabrook Computing.

Fig. 2. A particle simulation under MS-Windows. MAGIC has been ported to virtually all platforms and operating environments.


Graphically-driven, preprocessor interfaces have also been constructed. In 1992, an interface
based on the Apple HyperCard software platform
was built to provide general control language
input to MAGIC. In 1993, an interface based on
the Microsoft Excel spreadsheet was built for a
specific class of microwave amplifiers. Work on
more advanced GUI concepts is in progress.
MAGIC has been selected for use in the Microwave and Millimeter Wave Advanced Computational Environment (MMACE) [13], thus providing increased impetus to connect it to other
simulation tools as well as to CAD tools. MMACE
is currently under development by a consortium
of companies with support from DoD.
2. Configurability

Configurability is an approach for rapidly applying the history of algorithmic developments to a new problem. All the algorithmic options are
available in one tool, and the best algorithms for
any specific problem are selected by issuing appropriate commands. This section discusses some
of the key aspects of configurability, as embodied
in the MAGIC architecture.
2.1. Selectable, mutually compatible algorithms

Throughout the history of electromagnetic PIC, considerable effort has been expended to develop
algorithms (field, particle, geometry, emission
processes, etc.) which optimally address particular simulation requirements. One result is a
plethora of special-purpose codes. As yet, however, no single set of algorithms has emerged
which is superior in all applications. Therefore,
our vision of a useful code is one which allows the
user to make the appropriate tradeoffs of speed,
accuracy, and stability, and to select that combination of algorithms which best meets his requirements. Selectability implies mutual compatibility - that is, all of the options for field algorithm, particle algorithm, boundary conditions,
etc., - must function together, interchangeably.
Several architectural approaches can be taken
to achieve selectability. One is to compile source

code based on specified options to create a custom code. This results in a smaller executable,
but requires a precompiler, compiler, linker, and
an automated compilation and linking system to
be available for every new application. Another
approach is to provide an executable for each
algorithm and to exchange data via shared memory. However, this requires multiple executables
and suffers from shared memory and decomposition difficulties in distributed or parallel processing.
The architectural approach used in MAGIC is
to create a single executable for solving fields and
particles in 2D and another in 3D, with separate
executables for a GUI and for post-analysis. This
multi-tasking approach provides flexibility, while
collecting into one executable all computationally
intensive algorithms which exchange data at maximum speed. All algorithm options access the
same data; they simply update it using different
calculations and adjustments. Mutual compatibility of algorithms is largely the result of rigorous
adherence to a common mathematical foundation, as discussed in Section 3.
2.2. Command language

MAGIC provides configuration control through a command language, which enables the user to choose algorithms and other aspects of his
simulation. Certain features of the command language, such as variables, do-loops, and logical-if
tests, support the creation of reusable configurations, which we refer to as templates. In essence,
a template is an expert system which can be
applied to new applications by simple parameter
variation, thus providing efficiency in the design
environment. Templates also provide a library
and history of useful configurations. This concept
is discussed further in Section 8.4.
2.3. Connectivity

Versatility in a code's connectivity and programmability are key to successfully configuring it


into a larger environment, which is highly desirable for use in practical, concurrent design. The
command language enables easy connection of


external GUIs to the code and also permits user control of sequencing for the GUI, the simulation, and the post-processor. The use of multitasking also facilitates connectivity. MAGIC has
been connected to Excel (a spread-sheet GUI)
and to several CAD interfaces, including AutoCad, Pro-engineer and CANVAS (via IGES).
Features of the command language allow MAGIC
to be connected to virtually any interface.
2.4. Portability and performance

Portability and performance are determined largely by hardware, operating system, graphics, and windowing facilities. Five years ago, supercomputers were the workhorses of PIC calculations. Now, PCs handle a substantial fraction of the load [14-16]. The trend is to X-Windows and other windows-type environments, more color, and 3D graphical display.
The portability of MAGIC is facilitated by
software layers which isolate algorithms from the
environment. The bottom layer isolates the operating system. Above that are layers that isolate
the graphics drivers and machine precision. These
layers have been added by retrofitting, and a
process has been developed for isolating additional features, if necessary. The isolating layers
constitute a library separate from any of the
codes; once the library is ported, the codes themselves port easily.
Table 2
Performance of MAGIC on competing platforms

Platform                               Time (min:sec)
Y-MP                                   2:22
PARAMID (i860-XPs), 8 nodes            1:54
PARAMID (i860-XPs), 4 nodes            2:42
i860-XP                                8:54
HP755                                  4:05
IBM RS6000/560                         8:52
SILICON GRAPHICS                       7:02
SUN SPARC 10                           10:01
VACCELERATOR                           21:58
486 PC 66 MHz (256K cache)             21:11
486 PC 66 MHz (64K cache)              22:50
486 PC 50 MHz (Notebook)               25:09
486 PC 25 MHz                          49:19


Table 2 illustrates the performance of MAGIC in a single-user, dedicated mode for a nominal
test problem on a variety of platforms. The range
of performance across all the platforms spans
about one decade. Actual turnaround times on
larger platforms may increase because of time
sharing.
2.5. Extensibility

Extensibility is a measure of the viability of software for continued growth - a measure of
how much more can be added to a code before it
collapses under the weight of its own internal
complexity. A good architecture can allow far
more options and controls for configuration.
While primarily of concern to code developers,
extensibility also provides assurance to users that
the software will continue to support new advances and not become outmoded.
User extension is available through a built-in
compiler for processing mathematical expressions. This allows the user to specify functional
dependencies at run-time for any number of features. An internal compiler reads the function,
compiles it, and uses it at appropriate locations in
the processing. When this flexibility is insufficient, then there are explicit hooks where source
code can be added. Default, dummy routines are
provided, and they may simply be replaced with
new routines that provide the special functionality. The executable must be re-linked in such
circumstances.

3. Mathematical foundations


Maxwell's equations and the relativistic Lorentz equation are sufficiently general that they
are virtually never called into question in the
electromagnetic PIC approach. However, since
most physical problems involve fields defined
continuously over space and an astronomical
number of particles, the challenge is to achieve
good solutions with finite computational resources. Conditions for the overall validity of the
electromagnetic PIC approach have been thoroughly documented elsewhere [2]. This section


discusses the mathematical operations of the MAGIC code, focusing upon implementation and practical constraints.
3.1. Physical basis

MAGIC performs a time integration of Faraday's law, Ampere's law, and the particle force equation,

$$\partial_t B = -\nabla\times E,$$
$$\partial_t E = -J/\varepsilon + (\mu\varepsilon)^{-1}\,\nabla\times B, \qquad(3.1)$$
$$\partial_t p_i = F_i/m_i, \qquad F_i = q_i\,[\,E(x_i) + v_i\times B(x_i)\,],$$

subject to constraints provided by Gauss's law and the corresponding rule for the divergence of B,

$$\nabla\cdot E = \rho/\varepsilon, \qquad \nabla\cdot B = 0. \qquad(3.2)$$

In these equations, E(x) and B(x) are the electric and magnetic fields, x_i and p_i are the position and momentum of the ith charged particle, and J(x) and ρ(x) are the current density and charge density resulting from those particles.
To perform the time integration, known values
of the variables are used to compute time derivatives which are used to advance the variables in
time. The constraints are equivalent to an expression of charge continuity on the electric and
magnetic fields which must be satisfied as an
initial condition and at all later times.
This time integration is usually referred to as a
time-domain solution. In data management terms,
the old values of variables are stored in memory
and are simply overwritten as the new values are
calculated.
3.2. Discrete time

The time-integration scheme is based upon a fixed time interval, δt, between variable updates, such that when the time derivatives of Eq. (3.1) are approximated in finite-difference form, e.g., ∂E/∂t ≈ (E_new − E_old)/δt, they provide an equation for a new value of the variable at a time, δt, later than the previous value. Because of the natural complementary roles of the field and particle variables in Eq. (3.1), the leapfrog scheme illustrated in Fig. 3 is a natural choice. MAGIC offers this selection, which provides second-order truncation error, as well as numerous other integration schemes discussed in Sections 4 and 5. Some options allow different time steps to be used in different parts (e.g., kinematics) of the integration. However, the "full-time-step" and "half-time-step" variable definitions which provide well-centered differencing of Eq. (3.1) are the basic units of time, and special techniques ensure the well-centering of the force, current, and charge terms, as discussed later.

Fig. 3. Leapfrog time integration scheme. This is the most basic scheme available in MAGIC.

Fig. 4. Spatial definition of fields. Well-centering provides second-order accuracy in spatial derivatives.
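To make the discrete update concrete, the following is a minimal one-dimensional sketch of the leapfrog scheme of Eq. (3.1) for source-free fields (J = 0), written in Python for illustration; MAGIC itself is a FORTRAN code, and the array names, grid, and boundary treatment here are illustrative assumptions, not the code's internals.

```python
import numpy as np

# Minimal 1D leapfrog (Yee-style) update for Eq. (3.1) with J = 0.
# E_y is defined at full-grid points and integer time steps n;
# B_z is defined at half-grid points and half-integer time steps n+1/2.
c = 2.998e8                      # speed of light [m/s]
nx, dx = 200, 1.0e-3             # number of cells and cell size [m]
dt = 0.5 * dx / c                # satisfies the Courant condition c*dt < dx

E = np.zeros(nx + 1)             # E_y at full-grid points
B = np.zeros(nx)                 # B_z at half-grid points

def leapfrog_step(E, B):
    """Advance the fields by one time step (perfectly conducting walls)."""
    # Faraday's law:  dB_z/dt = -dE_y/dx
    B -= dt * (E[1:] - E[:-1]) / dx
    # Ampere's law (vacuum):  dE_y/dt = -c^2 dB_z/dx
    E[1:-1] -= dt * c**2 * (B[1:] - B[:-1]) / dx
    return E, B

# start from a Gaussian pulse in E and let it split and propagate
E[:] = np.exp(-(((np.arange(nx + 1) - nx / 2) / 5.0) ** 2))
for n in range(400):
    E, B = leapfrog_step(E, B)
```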


3.3. Discrete space

All of space is divided into regular, orthogonal cells in one of four coordinate systems (Cartesian,
cylindrical, polar, and spherical). The continuous
field variables, E(x) and B(x), are stored in
computer memory at specific "full-cell" and
"half-cell" locations on the cell, as established by
the traditional second-order, well-centered, differencing scheme illustrated in Fig. 4 [17].
The division of space into cells (grid generation) is relatively simple due to orthogonality. It is
performed as two (or three) separate, one-dimensional discretizations, each specifying full-grid locations along an axis. The full-grid intersections
create the cells. The spacing of full-grid locations
may vary according to resolution requirements
and to conform with the geometry. To maintain
second-order accuracy, the change in adjacent
cell size is typically limited to 25%, and maximum
cell aspect ratio is limited to five.
The level of automation can be selected by the
user. In the manual mode, a uniform or variable
grid is typically specified in sections, where each
section is matched to a key geometric object. In
the fully automated mode, it is sufficient to identify the key geometric objects and specify an
overall resolution. Then the grid generator automatically divides the axis into sections based on
the object locations and creates a variable grid,
subject to the constraint that the cell sizes match
at section boundaries to preserve second-order
accuracy.
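The following is a small Python sketch of the graded-spacing idea described above: one axis section is filled with cells whose sizes change by no more than 25% from one cell to the next. The function name and the end-of-section adjustment are illustrative assumptions, not MAGIC's grid generator.

```python
def grade_section(x0, x1, dx_start, growth=1.25):
    """Fill [x0, x1] with cells that start at dx_start and grow by at most
    'growth' (a 25% change) per cell, a sketch of smoothly varying spacing.
    The last cell is shrunk or stretched so the section ends exactly at x1."""
    edges, dx = [x0], dx_start
    while edges[-1] + dx < x1:
        edges.append(edges[-1] + dx)
        dx = min(dx * growth, x1 - edges[-1])   # never overshoot the section end
    edges.append(x1)
    return edges

# example: a 1 m section resolved with 1 cm cells at the left end
edges = grade_section(0.0, 1.0, dx_start=0.01)
```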
3.4. Divergence and curl operations

The finite-difference divergence and curl operations are derived by interpreting them as averages over a cell face or a cell volume. For example, by Green's theorem, the face-averaged curl operation becomes

$$\frac{1}{dA_k}\int dA\;\nabla\times E \;=\; \frac{1}{dA_k}\oint dl\cdot E \;=\; \frac{1}{dA_k}\sum_j dl_j\,E_j, \qquad(3.3)$$

which is a loop sum of the electric-field components and their edge-length elements, dl_j, around the face area, dA_k, associated with a magnetic-field component. The cell-averaged divergence operation involves the same face area. This formalism provides a reliable derivation of the finite-difference curl and divergence operations in non-Cartesian coordinates.
The numerical equivalents of the mathematical identities, ∇·(∇×A) = 0 and ∇×(∇φ) = 0, are also preserved exactly. This ensures that transverse (curl) and longitudinal (divergence) fields remain mutually isolated. Thus, time-stepping the fields does not create errors in the divergence. It also facilitates selective manipulation, i.e., pure filtering of either curl or divergence, as may be desirable for various purposes.
Numerical preservation of the identities also results in an exact energy-conservation rule corresponding to Poynting's theorem. In the absence of particles or other sources and sinks, the finite-difference field energy at time step n, defined as

$$W^n = \tfrac{1}{2}\sum_j \varepsilon\,(E_j^{\,n})^2\,dl_j\,dA_j \;+\; \tfrac{1}{2}\sum_k \mu^{-1}\,B_k^{\,n-1/2}B_k^{\,n+1/2}\,dl_k\,dA_k, \qquad(3.4)$$

remains constant for all later time-steps, to machine precision.
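The preserved identity ∇·(∇×A) = 0 is easy to check numerically on a staggered grid. The sketch below is an illustration under assumed names and layout, not MAGIC's internal implementation: it builds a random A_z on cell corners, takes the discrete curl to get edge-centered B components, and shows that the cell-centered divergence vanishes to round-off.

```python
import numpy as np

# Verify the discrete identity div(curl A) = 0 on a staggered 2D grid.
# A_z lives at cell corners; B = curl(A_z e_z) gives B_x on vertical edges
# and B_y on horizontal edges; div B is evaluated at cell centers.
rng = np.random.default_rng(0)
nx, ny, dx, dy = 16, 12, 0.1, 0.2
Az = rng.standard_normal((nx + 1, ny + 1))

Bx = (Az[:, 1:] - Az[:, :-1]) / dy         # B_x = dA_z/dy,  shape (nx+1, ny)
By = -(Az[1:, :] - Az[:-1, :]) / dx        # B_y = -dA_z/dx, shape (nx, ny+1)

divB = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
print(np.max(np.abs(divB)))                # zero to machine round-off
```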


3.5. Macro particle representation

The number of physical particles in a problem almost always exceeds the capacity of any computer memory. Hence, a single macro particle
typically must represent many physical particles,
which have the same mass and charge and are
approximately co-located in phase space. The
representation is satisfactory when there are sufficient macro particles to represent all phase space
regions which contribute to the result. Such a
simulation is said to have good statistics. Physical
phenomena benefiting from reduced volumes and
dimensions of phase space, e.g., charged particle
beams, are treated quite easily with macro particle techniques. Phenomena involving greater expanses of phase space, e.g., a thermalized plasma,
prove to be more demanding.


Particles and fields interact in two places in Eq. (3.1): the current density in Ampere's law, J, and the Lorentz force, F_i. Since the fields are
defined at discrete locations on the grid, each
particle is also mapped to the grid. The particle
charge is allocated to grid points surrounding the
particle; Section 5 describes several algorithms to
achieve this and their effect upon local charge
conservation. The current density, J, basically
reflects the change in allocation of particle charge
during a time step. In a similar manner, the
particle force is computed from the weighted sum
of fields from the grid points.
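As an illustration of mapping particles to the grid, the sketch below uses generic bilinear (cloud-in-cell) area weighting on a uniform 2D grid to allocate a macro particle's charge to the four surrounding grid points. This is a representative weighting only; the allocation algorithms actually offered by MAGIC are described in Section 5.

```python
import numpy as np

def deposit_charge(rho, x, y, q, dx, dy):
    """Bilinear (cloud-in-cell) allocation of a macro particle's charge q
    to the four grid points surrounding (x, y) on a uniform grid."""
    fx, fy = x / dx, y / dy
    i, j = int(fx), int(fy)          # lower-left grid indices
    wx, wy = fx - i, fy - j          # fractional position within the cell
    rho[i,     j    ] += q * (1 - wx) * (1 - wy)
    rho[i + 1, j    ] += q * wx       * (1 - wy)
    rho[i,     j + 1] += q * (1 - wx) * wy
    rho[i + 1, j + 1] += q * wx       * wy

rho = np.zeros((32, 32))
deposit_charge(rho, x=1.23, y=0.77, q=1.0e-12, dx=0.1, dy=0.1)
```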
3.6. Simulation constraints

In general, large cells and time steps are desired to reduce simulation expense. However, the
cell size and time step are constrained by spatial
and temporal resolution requirements for the
physical phenomenon of interest. They may be
further constrained by several numerical instabilities which can arise in plasma simulations.
A catastrophic numerical instability occurs when the time step, δt, is too large to resolve light waves of very short wavelength (nearly equal to the grid spacing). The Courant stability condition is cδt < δx_min, where c is the speed of light and δx_min is the minimum cell dimension. When it is violated, exponential growth in the fields destroys simulation validity. The stability condition may be relaxed significantly, but not circumvented, using algorithms discussed in Section 4.
Another catastrophic numerical instability occurs when the time step is too large to resolve the oscillation of particles at the plasma frequency. For stable plasma oscillations, the plasma stability condition, ω_p δt < 2, must be satisfied, where ω_p is the maximum plasma frequency of the charge distribution.
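A short sketch of how these two time-step constraints might be checked before a run; the plasma parameters and the electron-plasma-frequency formula used here are illustrative assumptions.

```python
import numpy as np

eps0, e, me, c = 8.854e-12, 1.602e-19, 9.109e-31, 2.998e8

def max_time_step(dx_min, n_e):
    """Largest usable time step given the Courant condition c*dt < dx_min and
    the plasma-oscillation condition omega_p*dt < 2 (electron density n_e)."""
    dt_courant = dx_min / c
    omega_p = np.sqrt(n_e * e**2 / (eps0 * me))   # electron plasma frequency
    dt_plasma = 2.0 / omega_p
    return min(dt_courant, dt_plasma)

# example: 0.5 mm minimum cell size and a 1e18 m^-3 plasma
print(max_time_step(dx_min=5e-4, n_e=1e18))
```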
A third instability is a slower, non-linearly
stabilized effect which occurs when the cell dimension is too large to resolve the smallest natural scale phenomenon. In a thermal plasma, for
example, "self-heating" causes growth in the Debye length ( = thermal velocity /plasma frequency) until it saturates, i.e., becomes approximately equal to the cell size. A new "cooling

algorithm," discussed in Section 5, can relax or


slow the onset of this instability. Exact, electromagnetic-plus-kinetic, energy-gmserving algorithms, which may have the potential to stabilize
the small Debye-length regime . absolutely, are
also under investigation.
3.7. Initial conditions

The default initial condition is one in which all fields identically vanish and there are no particles. However, arbitrary initial conditions can be
established by presetting fields and populating
the space with particles. In this case, Gauss's law
must be satisfied by solving Poisson's equation
and deriving the electric fields from the gradient
of the scalar potential. (This determines the longitudinal (curl-free) electric field only; the transverse (divergence-free) electric field, which is
proportional to ∂B/∂t, is arbitrary.)
Not all initialization problems require an electrostatic solution. It is frequently useful to initialize a charge-neutral plasma in which the ions
have infinite mass. A good example is a plasma
channel used to guide electron beams [18]. If only
electrons are populated and the initial electric
field is zero, then, by Gauss's law, ions are implicitly present. As the simulation progresses, the
plasma electrons move from their initial positions, leaving behind positively charged "holes"
in the space-charge field. The holes remain perfectly intact, and no ion macro particles are required.
Magnetostatic fields are very often important
in electromagnetic systems. During the simulation, static fields are simply added to dynamic
fields for the particle force calculation. MAGIC
can compute magnetostatic fields directly, and it
can also accept analytic results, experimental data,
and output from specialized, magnetic-field codes,
e.g., the POISSON group [19,20]. The simulation
shown in Fig. 5 used a multiple-cusp, magnetostatic field to confine electrons which, in turn,
produced an electrostatic potential well for
spherically convergent ion beams [21]. The magnetostatic field was specified analytically, and an
electrostatic solution to account for the electrons
completed the initial conditions.


Fig. 5. Simulation of the HEPS fusion experiment. Initial conditions include a multiple-cusp magnetic field and a non-neutral electron plasma.

4. Fields
Three classes of field algorithms are represented: electromagnetic (time-domain), electromagnetic (frequency-domain), and electrostatic.
The primary thrust is to develop algorithms which
work especially well with particles, e.g., to suppress noise or achieve some desired conservation
property which improves simulation fidelity. Electrostatic algorithms find additional use for initial
or boundary conditions in a transient simulation.
4.1. Electromagnetic (time-domain) fields

The time-domain algorithms receive the most emphasis due to interest in simulating transient, electromagnetic phenomena. Some general considerations and specific options for transient simulation are discussed below.
4.1.1. Mode selection/suppression
The transverse magnetic (TM) and transverse
electric (TE) modes are computed separately, allowing either to be selected (or suppressed). Ex-


ploitation of null field components is one reliable means of reducing CPU time. When field sources and sinks are spatially invariant in one or more directions, it is known a priori that certain field
components remain zero. A familiar example is
the waveguide, in which the orientation or frequency of a source permits only certain modes to
propagate. Waveguide modes are generally divided into TM and TE modes, where the longitudinal magnetic and electric fields, respectively,
vanish. Introduction of an ignorable coordinate
(transverse to the direction of propagation) restricts at least one of the mode numbers to zero,
e.g., TM_0n modes in a cylindrical waveguide. For
example, in TE modes, the magnetic field oriented in the invariant direction and the transverse electric field oriented perpendicular to the
invariant direction are zero.
An alternative use involves suppressing a mode
to aid in interpretation of associated physical
effects, e.g., space-charge [22]. Since Gauss's law
in 2D involves only the TM mode, suppression of
this mode achieves ad-hoc elimination of all
space-charge effects.
In general, particle motion couples the TE and
TM modes through the current term in Ampere's
law. However, it is still frequently possible to
employ a single mode when, for example, the
particles are constrained to the plane of the simulation, or when symmetry in the ignorable direction implies cancellation of currents. The validity
of single mode employment can always be established by examining the vector properties in all
the relationships of Eq. (3.1) iteratively.

4.1.2. Particle noise filtering


A major difficulty for electromagnetic PIC
simulations, especially those involving relativistic
particles, is the high level of electromagnetic noise
which results from poor statistics, i.e., an inadequate number of macro particles. In principle,
this can be addressed either by increasing the
number of macro particles to reduce the source
of noise or by removing the noise from the fields
by "filtering." However, because noise goes inversely as the square root of the particle number,
filtering fields is usually more efficient.


Particle-induced noise is introduced through the current term in Maxwell's equations. It is
fairly easy to distinguish, being generated at the
very shortest wavelengths (on the size scale of
particles and cells), whereas the physics of interest occurs at longer spatial scales, being (hopefully) well resolved spatially. However, if the noise
level becomes excessive, particle motion may be
altered, causing aliasing to the longer spatial
scales and contaminating the physics of interest.
Transverse particle noise generates propagating, wave-like, electromagnetic noise. This results
in large curl derivatives in Eq. (3.1). Transverse
noise is easily removed by filtering, using the
time-biased and high-Q algorithms described below.
Longitudinal particle noise is associated with
spatial fluctuations in space charge and the
Gauss's law constraint. These fluctuations drive
the slow, self-heating instability in the Debye
shielding regime [23]. Longitudinal noise is best
removed using one of the charge allocation algorithms discussed in Section 5.
4.1.3. Stability
In all of the time-domain algorithms, the time
step is limited by the Courant condition. The
Courant instability occurs at the very shortest
wavelengths resolved on the grid. The same
field-filtering techniques that reduce particle
noise also have the side effect of extending the
Courant condition. This permits some of the extra CPU demand from the filtering algorithms to
be recovered by using a larger time step.
4.1.4. Centered-difference and time-reversible algorithms
The simplest time-domain algorithm is the
commonly-used centered-difference, or leapfrog,
scheme illustrated in Fig. 3. Its chief advantages
are speed and simplicity. There is no damping at
any frequency; hence, the algorithm is highly susceptible to particle noise, as described above.
However, this same characteristic may be a virtue
in purely electromagnetic calculations, where
centered-difference is the option of choice due to
its speed.

In a time-symmetric variation developed by Boris [24], the magnetic field is advanced half a time step prior to and immediately after particle advancement. The Lorentz force result is identical to the average of time-centered magnetic
fields. With this algorithm, the entire field-particle evolution is exactly time-reversible (to machine precision), such that a simulation run backwards would return precisely to its initial state.
This was actually demonstrated in an early version of MAGIC.
4.1.5. Time-biased algorithm
The algorithm most often used for filtering
particle noise is a semi-implicit scheme due to
Godfrey [25]. At each time step, the results of the
basic algorithm are iteratively refined such that
field amplitudes are reduced in the upper spectral range, where transverse particle noise is introduced. The iteration is based on the following
divergence-conserving equations:
$$E^{n+1,i} = (1-\tau_i)\,E^{n+1,i-1} + \tau_i\Bigl[E^{n} + \delta t\,(\mu\varepsilon)^{-1}\,\nabla\times\bigl(a_1 B^{n+3/2,i-1} + a_2 B^{n+1/2} + a_3 B^{n-1/2}\bigr)\Bigr],$$
$$B^{n+3/2,i} = B^{n+1/2} - \delta t\,\nabla\times E^{n+1,i}. \qquad(4.1)$$

In Eq. (4.1), the parameters a_1, a_2, and a_3 determine the degree of spatial filtering and the time-centering, and the iteration coefficients, τ_i, must be optimized for the most effective filtering. The sum of the a_i parameters must be unity.
For a time-centered solution, the a_i parameters take on the values ¼, ½, and ¼, and the iteration is simply a Chebychev-accelerated, Richardson-iterative solution of the implicit equation, with the τ_i coefficients playing the role of the Chebychev acceleration coefficients. More often, the iteration is used with a_3 = 0 in a time-uncentered, or time-biased, scheme which provides noise filtering at short spatial scales.
The filtering process can be easily understood in terms of the spectral eigenmode decomposition, which for uniform, 1D cell spacing is identical to the Fourier decomposition.


Fig. 6. Time-biased and high-Q filter polynomials, P(β), shown against the range of physical validity and the range of electromagnetic noise. The polynomial shape determines relative noise reduction in different frequency ranges.

Combining Eq. (4.1) under these circumstances gives the equation

$$E^{n+1,i} = \tau_i E^{n+1,1} + \bigl[\,1 - \tau_i - \tau_i\,a_1\,\delta t^2(\mu\varepsilon)^{-1}\,\nabla\times\nabla\times\,\bigr]E^{n+1,i-1}. \qquad(4.2)$$

If β is the normalized eigenmode, e.g., β = k/k_max, where k_max is the maximum spatially-resolvable Fourier wave number, then the time-biased iterations result in multiplication by a spectrally dependent polynomial, P(β), such as that shown in Fig. 6. The order of the polynomial is equal to the number of iterations, and the shape is determined from the a_i and τ_i parameters. The form of Eq. (4.2) ensures that, since P(β = 0) = 1, no filtering results at well-resolved spatial scales.
For time-biased applications, the τ_i parameters are selected to give a best-fit polynomial to the class of functions defined by P(β) = 1/(1 + 4a_1 m²β²), where m = cδt/δx; this is the curve labeled "time-biased filter" in Fig. 6. The degree of filtering is controlled by the a_1 parameter, with zero producing no filtering and values near unity producing strong filtering. As a_1 rises, more iterations are required to fit the filtering function. The best-fit, Chebychev-interpolating polynomial occurs when the coefficients are given by

$$\frac{1}{\tau_i} = 1 + \frac{2a_1}{1-a_1}\,\Biggl(\frac{1 - \cos\bigl(\pi(2i-1)/(2I)\bigr)}{2\,\cos\bigl(\pi/(2I)\bigr)}\Biggr), \qquad(4.3)$$

where I is the total number of iterations.
4.1.6. High-Q algorithm


The high-Q filter is so named because of its ability to model high-Q, or nearly loss-free, resonant cavities. The fields in a resonant cavity oscillate with a fixed spatial distribution whose spatial eigenvalue, β_res, is related to the frequency of oscillation. When a noise-filtering algorithm is used, the oscillation is slightly damped, according to the value of the filtering polynomial, P(β_res).
In some low-Q cavities, the time-biased filter has actually been used to represent the physical loss terms [26]. However, for high-Q cavities, even a very small departure of P(β_res) from unity can cause damping to exceed actual physical losses, thus resulting in incorrect saturation levels.

Fig. 7. Simulation of the plasma Wakefield klystron. The high-Q filter algorithm allows accurate modeling of saturation in the high-Q cavity.
The high-Q filter provides a greater region of near-unity behavior at the well-resolved spatial scales near β = 0. The polynomial illustrated in Fig. 6 is P(β) = (1 − γβ²)²(1 + 2γβ²), which contains a filter-level parameter, γ, usually taken to be about 0.85. The high-Q attribute comes from the fact that the β² term vanishes.
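A two-line check of this property, i.e., that the β² term of P(β) = (1 − γβ²)²(1 + 2γβ²) cancels, leaving 1 − 3γ²β⁴ + 2γ³β⁶:

```python
import numpy as np

gamma = 0.85
# P(beta) = (1 - gamma*u)^2 * (1 + 2*gamma*u) with u = beta^2;
# coefficient arrays are in ascending powers of u.
factor1 = np.array([1.0, -gamma])                     # (1 - gamma*u)
factor2 = np.array([1.0, 2.0 * gamma])                # (1 + 2*gamma*u)
coeffs = np.polynomial.polynomial.polymul(
    np.polynomial.polynomial.polymul(factor1, factor1), factor2)

print(coeffs)   # [1, 0, -3*gamma^2, 2*gamma^3]: the u = beta^2 term vanishes
```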
Fig. 7 illustrates a simulation of a 1 MW, 1
GHz plasma Wakefield klystron [27,28]. A wakefield effect [29,30] in a plasma-filled cavity modulates an intense beam of electrons, which then
propagates through a series of extraction cavities.
The first of these provides pulse shaping rather
than actual power extraction; i.e., the cavity is
tuned precisely in frequency and has very low
losses. To avoid breakdown, the saturation level
must be carefully adjusted. Simulations of this
nature would be very impractical without the
high-Q algorithm.
4.2. Quasi-static and electrostatic fields

In quasi-static or static problems, a dynamic particle component forms static or slowly-varying current and space charge whose electric and magnetic fields strongly affect the dynamic particle motion, resulting in a nonlinear equilibrium. One familiar application of this is the focused electron diode illustrated in Fig. 8.

Fig. 8. Simulation of focused electron diode. This electrostatic solution is performed in spherical coordinates to preserve the cathode surface shape.
Time-domain algorithms can be, and have been, used to determine the static behavior of
such highly nonlinear devices. However, this is
often computationally disadvantageous compared
to more direct static methods, especially for lowvelocity particles. The time-domain resolves field
variation at speed-of-light time scales, rather than
at the slower particle time scales, and a very large
number of electromagnetic time steps may be
required.
4.2.1. Quasi-static algorithm
A quasi-static field algorithm has been implemented to address the time-scale disparity directly. The principle behind this field algorithm is
quite simple. Faraday's law is rewritten with an
ad-hoc multiplier, A² < 1, according to

$$\partial_t B = -A^2\,\nabla\times E. \qquad(4.4)$$

This reduces the effective speed of light to Ac. Since Ampere's law and Gauss's law are unaffected, the static electric and magnetic fields are unchanged and only the transient fields are affected. With the electromagnetics adjusted to the
same time scale as the particle dynamics, the time
step can be increased and nonlinear dynamic
equilibrium can be evaluated in a computationally favorable manner.
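In code, the quasi-static option of Eq. (4.4) amounts to scaling the curl term in the Faraday update; a one-function sketch (the 1D field arrays and the value of A are illustrative assumptions, not MAGIC's internals):

```python
def faraday_quasistatic(B, E, dt, dx, A=0.05):
    """Faraday update of Eq. (4.4): the curl of E is scaled by A**2 (< 1),
    reducing the effective speed of light to A*c while leaving Ampere's and
    Gauss's laws, and hence the static fields, untouched."""
    B -= A**2 * dt * (E[1:] - E[:-1]) / dx
    return B
```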
4.2.2. SOR and ADI algorithms
Electrostatic algorithms apply for static-charge
distributions. Poisson's equation is solved using
two iterative methods, successive over-relaxation
(SOR) and alternating-direction implicit (ADI).
Both algorithms are compatible with all coordinate systems, boundary conditions, and materials
used to compute electromagnetic fields. The
scalar potential is defined at
the cell corners, or full-grid positions. The electrostatic fields are simply the gradient of the
converged scalar potential.
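The sketch below shows a plain SOR solution of Poisson's equation on a uniform Cartesian grid with the potential held at zero on the boundary; it is a minimal illustration of the method, not MAGIC's solver, which also supports ADI, the other coordinate systems, and the full set of boundary conditions and materials.

```python
import numpy as np

def poisson_sor(rho, dx, eps0=8.854e-12, omega=1.8, tol=1e-8, max_iter=20000):
    """Solve laplacian(phi) = -rho/eps0 by successive over-relaxation on a
    uniform grid (cell size dx in both directions), with phi = 0 on the
    boundary (Dirichlet)."""
    phi = np.zeros_like(rho)
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, rho.shape[0] - 1):
            for j in range(1, rho.shape[1] - 1):
                new = 0.25 * (phi[i + 1, j] + phi[i - 1, j] + phi[i, j + 1]
                              + phi[i, j - 1] + dx**2 * rho[i, j] / eps0)
                change = omega * (new - phi[i, j])
                phi[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            break
    return phi
```

The electrostatic field then follows from finite differences of the converged potential, E = −∇φ.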
Electrostatic fields may be used to initialize
electromagnetic fields for transient simulation.
Such initialized fields will satisfy Gauss's law.
Also, since they derive from gradient of a scalar,


the electric field will be curl-free, ensuring a smooth start electromagnetically.
Since the field distribution in a TEM wave is obtained by solution of the Laplacian [31], the electrostatic algorithms may also be used to generate the shapes of incident electromagnetic waves. These 2D field distributions are employed in several different ways. In 3D, the fields can be introduced through an incident voltage pulse [5], as discussed in Section 6. In 2D, they can be used to introduce a TEM wave which propagates in the symmetry coordinate. In this case, the field strength is set by an external circuit, which adds electrostatic fields incrementally to the dynamic fields. This technique is used extensively for simulation of magnetron devices [32,33]. Fig. 9 illustrates the cross section of such a simulation; note that the TEM wave has no obvious port of entry; instead, it "propagates" in the ignorable coordinate.

Fig. 9. Simulation of A6 relativistic magnetron. The A-K voltage is maintained via an external circuit, representing a TEM wave in the ignorable coordinate.

4.3. Electromagnetic (frequency domain) fields

Steady-state, sinusoidal (CW) electric and magnetic fields for a complicated structure can be solved either in the time domain using the appropriate, fixed-frequency sources, or in the frequency domain. When there is little or no damping in a structure, the transient behavior associated with multiple reflections complicates the time-domain approach. The frequency-domain algorithm solves for eigenmodes and eigenvalues of the hyperbolic equation,

$$\bigl[\,\omega^2\mu\varepsilon - \nabla\times\nabla\times\,\bigr]E = 0. \qquad(4.5)$$

The result is the wave amplitude for an electromagnetic oscillation at fixed angular frequency, ω. These hyperbolic equations are solved with iterative techniques; we note that the curl-curl operations are identical to those required for time stepping of the electromagnetic fields, especially in the treatment of boundaries, which is often the most difficult aspect to implement computationally.
The three-dimensional frequency-domain algorithm is described more fully elsewhere [34]. The
solution uses much of the same software as the
time-domain algorithms because of the similarity
in mathematical operations. This frequency-domain algorithm is often used to compute cavity
eigenmodes and resonant frequencies for complex, three-dimensional structures [35].
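As a one-dimensional analogue of Eq. (4.5), the curl-curl operator for a closed, source-free cavity reduces to −d²/dx² with E = 0 at the walls, and its eigenvalues give the resonant frequencies ω = c√λ. The sketch below only illustrates the eigenvalue formulation; it is not the three-dimensional solver of Ref. [34].

```python
import numpy as np

c, L, n = 2.998e8, 1.0, 200            # light speed, cavity length [m], cells
dx = L / n

# discrete curl-curl operator in 1D: -d^2/dx^2 with E = 0 at both walls
main = np.full(n - 1, 2.0 / dx**2)
off = np.full(n - 2, -1.0 / dx**2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eigvals = np.linalg.eigvalsh(A)
freqs = c * np.sqrt(eigvals) / (2 * np.pi)   # resonant frequencies [Hz]
print(freqs[:3])                             # approximately m*c/(2L), m = 1, 2, 3
```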

5. Particles
Multiple particle species can coexist simultaneously, including electrons, protons, ions in a
variety of charge states, and arbitrarily-defined
particles. Here, a species is defined only by a
unique charge-to-mass ratio. Each macro particle
generally represents a unique number of physical
particles, depending on the time, place, and
statistics of the creation. Creation models are
available for most physical emission processes as
well as artificial mechanisms. For example, macro
particles may be populated in an initial state or
imported from other codes.
In addition to creation, the essential particle
operations are destruction, kinematics (motion),
and current density allocation. Destruction normally occurs when a macro particle strikes a solid


material or passes out of the simulation. Kinematics determines the motion of individual macro
particles within the time step. Current densities
from the ensemble of macro particles are used by
Maxwell's equations. For each of these operations, there are multiple algorithms and options.
5.1. Particle creation

Macro particles are created in the simulation either by an emission process or by entering
through a boundary. They may also be populated
in the simulation as an initial condition.
The creation of macro particles is inherently
statistical in nature. At present, macro particle
division or merging are not allowed, and simulation statistics are controlled solely through creation and destruction mechanisms. Statistical
functions allow select regions of phase space to
be emphasized (over-weighted statistically), while
still preserving the required physical properties.
Other options, common to all of the emission
processes, control other aspects of creation. For
example, an initial spacing option allows uniform,
random, or weighted macro particle placement,
both transverse and normal to the emission surface. (The normal spacing distribution simulates
continuous creation during a single time step and
improves longitudinal continuity.)

5.1.1. Plasma surface emission

Plasma surface emission can occur wherever the local electric field is strong enough to cause surface breakdown and plasma formation. The resulting plasma surface can be considered as a
metal with zero work function; hence, both electrons and ions may be "emitted" under the influence of the local fields. The phenomenological
algorithm, which is based upon Gauss's law, allows macro particle creation until the surface
field is reduced to some specified residual value.
Symbolically, this is represented as

$$\Delta Q/dA = f(t - t_b)\,\bigl[\varepsilon\,(E_c - E_r) - \rho\,dx\bigr], \qquad(5.1)$$

where f is the plasma formation function, t_b is the time of cell breakdown, E_r is the residual field, ρ is the pre-existing charge density, and dx is the cell height. Breakdown is initiated at a
is the cell height. Breakdown is initiated at a
particular cell when the field, Ee, exceeds a
threshold, Et. Once initiated at a cell, the process
is considered irreversible; the value of t b is
recorded, and the effective plasma area grows
progressively in time with f(t - t b), 0 <! < 1. Restrictions may be imposed to limit the minimum
macro particle charge and maximum current density. Virtually all material-dependent parameters
may be specified as functions of time and space.
This algorithm has proven to be quite robust
and is particularly well suited for magnetic insula-

Fig. 10. Plasma surface emission in the Aurora diode. Plasma formation occurs at the cathode tip, electron leakage characterizes
the moving front, and magnetic insulation follows behind.


tion [36] and pulsed-power applications. It can even be successfully applied in the presence of other emission processes [37]. Fig. 10 illustrates plasma surface emission (including curved surfaces) in a simulation of the AURORA diode
plasma surface emission (including curved surfaces) in a simulation of the AURORA diode
[38]. Parametric simulations helped to redesign
this x-ray simulator to achieve faster rise-time
performance.
5.1. 2. Field emission

Field emission, in which the energy required to overcome a material work function is supplied by an electric field, is described by the Fowler-Nordheim equation,

$$\frac{dN}{dA\,dt} = \frac{A\,E_s^2}{\phi\,t(y)^2}\,\exp\!\left(-\frac{B\,v(y)\,\phi^{3/2}}{E_s}\right), \qquad(5.2)$$

where E_s is the cell field extrapolated to the surface. The work function, φ, and the other variables may be functions of time and the spatial coordinates. This algorithm is particularly suitable for modeling the performance of field emitter tips [39,40].
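A sketch of evaluating a Fowler-Nordheim emission rate for one surface cell is shown below. The numerical constants and the approximations t(y) ≈ 1 and v(y) ≈ 1 − y² are conventional textbook choices assumed here for illustration; they are not necessarily the values or normalization used in MAGIC.

```python
import numpy as np

def fowler_nordheim_rate(E_s, phi, A_FN=1.54e-6, B_FN=6.83e9):
    """Conventional Fowler-Nordheim current density [A/m^2] for a surface
    field E_s [V/m] and work function phi [eV].  The image-charge functions
    are approximated as t(y) ~ 1 and v(y) ~ 1 - y**2 (assumption)."""
    y = 3.79e-5 * np.sqrt(E_s) / phi          # barrier-lowering parameter
    t_y, v_y = 1.0, 1.0 - y**2
    return (A_FN * E_s**2 / (phi * t_y**2)
            * np.exp(-B_FN * v_y * phi**1.5 / E_s))

# emitted charge from one cell face during one time step
dA, dt = 1.0e-8, 5.0e-13                      # face area [m^2], time step [s]
dQ = fowler_nordheim_rate(E_s=5.0e9, phi=4.5) * dA * dt
```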
5.1.3. Thermionic emission

In thermionic emission, the energy required to overcome the work function is thermal. The current density is

$$J = A\,T^2\exp(-\phi/kT). \qquad(5.3)$$

The work function, φ, and temperature, T, may be functions of time and spatial coordinates. This algorithm can be used to model thermionic cathode performance.
5.1.4. Photoemission
In photoemission, the energy required to overcome the work function is supplied by incident X-rays. In general, the resulting electron yield, η, is a function of the material, the photon energy, and the incidence angle, θ_i. For low-energy photons (1-100 keV), the yield is weakly dependent on θ_i, and the electron distribution goes as the cosine of the azimuthal angle, θ, and is approximately independent of polar angle, φ. Thus, the creation algorithm is

$$\frac{dN}{dA\,dt\,dE\,\sin\theta\,d\theta} = \frac{1}{\pi}\,\eta\,s(t - t_s)\,f(E)\cos\theta, \qquad(5.4)$$

where f(E) is the electron energy distribution (which must be supplied separately and is often calculated in an electron transport code). The temporal function, s, depends upon the distance from the photon source to the point of emission. Both wave form and source location can be specified to model retarded-time effects, but the decreased yield associated with a spreading photon flux is correctly accounted for only in three dimensions.


Fig. 11. Cross section of 425 MHz klystrode. Electron gun, cavity, and collector components were modeled simultaneously and as
separate components.


5.1.5. Secondary-electron emission

In secondary-electron emission, the energy required to overcome the work function is furnished by an incident electron. Secondary-emission models in MAGIC are presently generated
using advanced features in the command language to drive beam injection (see Section 5.1.6)
from a macro particle flux diagnostic (see Section
7.2). A phenomenological model for secondary-electron emission is under development.
5.1.6. Beam injection

Beam injection specifies a beam to be emitted from a metallic surface. It is used when the time
and spatial profile of the incident current density
is known, or to model a beam created outside the
simulation space. Gyro-kinetic beam injection is a
variation designed to model cyclotron auto-resonant masers, with special controls for the guiding
center radius and other parameters.
Beam injection was used to specify the time-varying, gated-emission beam used in the
klystrode amplifier simulation shown in Fig. 11.
Parametric variation of the bunch shape was used
to optimize amplifier gain and efficiency [26].
5.1. 7. Pre-ionization

A particular region of simulation space can be initialized with a neutral or non-neutral plasma
[41]. If required, the associated field can be found
with a Poisson solver (see Section 4) to satisfy
Gauss's law. Fig. 5 illustrates an electron plasma
with a spatially-dependent temperature confined
in a multiple-cusp magnetic field. Pre-ionization
was used to distribute particle velocities in a
random, thermal-like manner approximating a
known spatial dependence. For the large plasma
chamber in Fig. 7, pre-ionization was used to
create a cold, neutral plasma of varying density in
the large chamber, including a laser-induced
plasma channel formed along the axis just prior
to beam injection.
5.1.8. Import/export

Import and export connect two different simulations, performed in adjacent spatial regions. An
initial simulation in one spatial region of the problem "exports" (writes a file containing) the complete time history of macro particles crossing
a specified plane, along with the associated electric fields.
A subsequent simulation "imports" (reads the
file containing) this data to extend the spatial (or
temporal) domain. At each time step, macro particles that crossed the plane in the first simulation are added to the second. Transverse electric
fields on the plane are also set to match. If the
time steps in the two simulations are the same
and there are no backward waves, then the result
will be identical with that produced by a single,
large simulation.
This capability is extremely useful in designing
multi-cavity devices, where it can be used to bring
individual cavities to saturation sequentially without including the preceding cavities. The CW
result from the last saturated cavity is simply used
repetitively as input for the next one.
5.2. Particle destruction

Macro particles are destroyed (removed from the simulation) when they enter a solid material
(conductors, dielectrics, etc.) or penetrate certain
outer boundaries. (When they penetrate mirror
or periodic symmetry boundaries, they are not
destroyed but rather re-enter the simulation with
appropriately changed momentum and location.)
There are two destruction algorithms, referred to
as "soft kill" and "hard kill", which differ in their
sophistication, speed, and ability to conserve
charge near irregular surfaces.
In soft kill, each simulation cell is marked to
indicate whether macro particles are to be destroyed. If a macro particle is subsequently found
in a marked cell, it is simply deleted from the
active list. No mathematical calculations are required, so soft kill is fast and simple. Particle
trajectories will occasionally be observed to penetrate a surface and terminate at some distance
below the surface. However, the current and
charge density calculation associated with motion
is straightforward (Section 5.4), and the macro
particle will also contribute correctly to specified
diagnostics (see Section 7.2).


In hard kill, each cell is marked according to whether macro particles might be destroyed, and information is stored to indicate the locations of all relevant surfaces associated with that cell. Then, if a macro particle is subsequently found in a marked cell, calculations are performed to determine all possible surface intercepts. If one or more intercepts has occurred, the macro particle final coordinates are set to match the earliest intercept, and the particle is deleted. The particle trajectory will be observed to terminate precisely on the material surface. Current density calculations proceed as usual, but using the modified final coordinates. Hard-kill destruction preserves charge conservation in the presence of nonconformal (non-stair-stepped) surfaces and is thus essential to successful field emission from irregular structures [42].
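The hard-kill idea can be illustrated for axis-aligned surfaces: find the earliest fractional crossing of the particle's straight-line step and terminate the trajectory exactly on that surface. The sketch below is a simplified illustration, not MAGIC's full intercept calculation.

```python
def hard_kill_intercept(x0, x1, surfaces):
    """Return (t_min, end_point), where t_min in (0, 1] is the earliest
    fractional crossing of any surface plane by the step x0 -> x1, or
    (None, x1) if no surface is crossed.  Each surface is (axis, position)."""
    t_min = None
    for axis, s in surfaces:
        step = x1[axis] - x0[axis]
        if step == 0.0:
            continue
        t = (s - x0[axis]) / step
        if 0.0 < t <= 1.0 and (t_min is None or t < t_min):
            t_min = t
    if t_min is None:
        return None, x1
    hit = [x0[k] + t_min * (x1[k] - x0[k]) for k in range(len(x0))]
    return t_min, hit

# a particle stepping through a wall at x = 1.0
print(hard_kill_intercept([0.8, 0.2], [1.3, 0.4], surfaces=[(0, 1.0)]))
```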

5.3. Particle kinematics

Particle kinematics is defined to include the calculation of forces on the macro particles as well as the motion due to those forces using the Lorentz equation of Eq. (3.1). The force calculation includes spatial and temporal filters and allows species to be treated independently. Motion may be computed relativistically and/or three-dimensionally, depending upon velocity, external field, and electromagnetic mode specifications.
5.3.1. Force algorithm
The force calculation has four steps: (1) spatial
filtering of the fields, (2) temporal filtering of the
fields, (3) addition of external fields, and (4)
macro particle coordinate weighting. Only those
field components needed by the kinematics algorithm will be calculated. The spatial filter averages discrete fields over the grid using a specific
mapping for each component, e.g., a six-point
filter for E_1. Following application of appropriate
boundary conditions to the spatially filtered fields,
an optional temporal (Kalman) filter can be applied. Then, any externally-generated (e.g., magnetostatic) fields are added to the filtered fields.


Finally, the forces on individual macro particles


are calculated by linear interpolation of the fields
in coordinate space.
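The following sketch illustrates the flavor of these steps for a single field component in one dimension; the two-point nodal average stands in for MAGIC's component-specific six-point filter, and all names here are assumptions:

```python
import numpy as np

def gather_force_field(E_edge, E_ext, x_particles, dx):
    """Sketch of the force-field preparation: average a staggered (edge) field
    to the nodes (a stand-in for the component-specific spatial filters), add an
    externally generated field, then linearly interpolate to each particle."""
    E_node = np.empty(E_edge.size + 1)
    E_node[1:-1] = 0.5 * (E_edge[:-1] + E_edge[1:])   # simple two-point average
    E_node[0], E_node[-1] = E_edge[0], E_edge[-1]     # crude boundary handling
    E_node = E_node + E_ext                           # add external (e.g. magnetostatic) field

    s = x_particles / dx
    i = np.clip(s.astype(int), 0, E_node.size - 2)
    w = s - i
    return (1.0 - w) * E_node[i] + w * E_node[i + 1]  # linear interpolation in space
```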
5.3.2. Basic kinematic algorithm
The basic kinematic algorithm in MAGIC was
developed by Boris [24]. Each macro particle calculation is performed in a unique coordinate system defined by its own location, and the results
are then transformed back to the simulation coordinates. This scheme completely avoids the singularity problems which typically plague calculations in non-Cartesian systems. The impulse to
advance macro particle momentum is applied in
three steps: half of the impulse due to the electric
field, rotation of the momentum vector due to the
magnetic field, and the remaining half of the
electric field impulse. The final velocity vector is
computed from the relativistic momentum and
then integrated to obtain final coordinates.
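A generic sketch of this sequence of steps (the standard Boris push in Cartesian coordinates; MAGIC's per-particle local coordinate systems and transformations are not reproduced here):

```python
import numpy as np

def boris_push(x, p, E, B, q, m, dt, c=2.998e8):
    """One Boris step: half electric impulse, magnetic rotation, half electric
    impulse, then a position update from the relativistic momentum."""
    p_minus = p + 0.5 * q * E * dt                          # first half electric impulse
    gamma = np.sqrt(1.0 + np.dot(p_minus, p_minus) / (m * c) ** 2)
    t = (q * dt / (2.0 * gamma * m)) * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    p_prime = p_minus + np.cross(p_minus, t)
    p_plus = p_minus + np.cross(p_prime, s)                 # rotation about B
    p_new = p_plus + 0.5 * q * E * dt                       # second half electric impulse
    gamma_new = np.sqrt(1.0 + np.dot(p_new, p_new) / (m * c) ** 2)
    x_new = x + p_new / (gamma_new * m) * dt                # integrate velocity for coordinates
    return x_new, p_new
```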
Although somewhat involved, this general approach has proven to be both flexible and robust.
We have added other coordinate systems (e.g.,
spherical), transformations, and modifications for
speed and accuracy. There are currently eight
specific variations tailored to speed, dimensionality, and relativistic requirements. Still other options are discussed below.

5.3.3. Predictor-corrector option and multi-arc methods
Normally, initial coordinates are used to calculate macro particle forces. The predictor-corrector option calculates forces using both initial and final coordinates. This requires that the complete
kinematic cycle be performed twice. The first
cycle is used to estimate the final coordinates
(prediction) and to re-evaluate the forces for the
second cycle (correction).
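A sketch of the predictor-corrector cycle, assuming helper routines `fields_at` and `push`; averaging the two force evaluations in the corrector is an assumption about the weighting, not a statement of MAGIC's exact scheme:

```python
def predictor_corrector_step(x, p, fields_at, push, dt):
    """Predictor-corrector sketch: a full kinematic cycle with forces from the
    initial coordinates predicts the final position; the corrector cycle uses
    forces evaluated with both positions.
    fields_at(x) -> (E, B) and push(x, p, E, B, dt) -> (x, p) are assumed helpers."""
    E0, B0 = fields_at(x)
    x_pred, _ = push(x, p, E0, B0, dt)                       # predictor cycle
    E1, B1 = fields_at(x_pred)                               # forces re-evaluated at prediction
    return push(x, p, 0.5 * (E0 + E1), 0.5 * (B0 + B1), dt)  # corrector cycle
```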
Multi-arc kinematics provides accuracy and
stability when ω_c Δt > 1, where ω_c is the cyclotron
frequency. The multi-arc calculation divides the
application of the impulse into a number of rotations, or arcs. An odd number of arcs is always
used, and special care is taken to achieve proper
time-centering.


5.4. Current-density algorithms

Current-density algorithms accumulate the individual contributions from the ensemble of


macro particles to compute a current-density field.
The specific components required depend on the
field and kinematic algorithms selected. For example, if a TM mode is specified, then every
particle with velocity in the ignorable coordinate
will be assumed to have a symmetry particle
moving in the opposite direction. In this case, the
symmetry component of current density vanishes
identically and no computation is required. Current densities may be segregated by species for
diagnostic purposes. The charge density field will
automatically be computed if it is required by any
other algorithm.
Options for computing current-density fields
utilize different spatial allocations, temporal filtering, and correction schemes. Three specific
combinations are discussed here. The first conserves charge exactly, but suffers from excessive
grid noise. The second conserves charge approximately, using a correction term. The third algorithm is used in conjunction with either of the
first two to broaden the spatial footprint. This
algorithm conserves charge only in a time-averaged sense.
5.4.1. Charge-conserving algorithm
The charge-conserving algorithm automatically
satisfies Gauss's law in the local (i.e., cell) sense.
However, the very attribute which ensures conservation also creates short-wavelength spectral
noise in the current density. Since noise generally
goes inversely as the square root of the number

of macro particles, reducing noise by numbers can be impractical; instead, field filtering algorithms (see Section 4.2) can be used to mitigate this effect.
The charge-conserving algorithm can be understood by considering rectilinear motion and then generalizing to three dimensions. With an equivalent density specification, the current allocation,

$$J = \frac{dq\,(x^{k+1} - x_i)}{dx\,dy\,dz\,dt} + \frac{dq\,(x_i - x^k)}{dx\,dy\,dz\,dt}, \tag{5.7}$$

where x^k and x^{k+1} are the initial and final macro particle coordinates and x_i is the spatial full-grid point, can be shown to satisfy Gauss's law in one dimension.
Three-dimensional motion can then be described as a sequence of one-dimensional rectilinear translations. Fig. 12 illustrates two of six possible paths in three-dimensional space. By the argument above, each of the six paths will satisfy Gauss's law. They differ only in order of motion and in transverse weighting of the longitudinal current density components. The particular path selected for current-density allocation is determined randomly. (Alternatively, the six paths could be averaged.)
That this algorithm satisfies local charge conservation can be demonstrated by computing the deviation from Gauss's law over all simulation time and space. In actual tests, errors were found to be at the level expected due to machine roundoff. The test algorithm remains a permanent feature of the code to support future current-density algorithm development and to encourage challenges.

Fig. 12. Two paths for current allocation. Six possible paths connect the particle initial and final location in the charge-conserving algorithm.
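A one-dimensional sketch of the allocation of Eq. (5.7), assuming the time step is short enough that a particle crosses at most one full-grid point per step; the random choice among the six three-dimensional paths is not shown:

```python
def deposit_current_1d(J, x_old, x_new, q, dx, dt, area=1.0):
    """1D charge-conserving deposition (sketch): each cell edge receives
    q * (path length inside that cell) / (volume * dt), so Gauss's law is
    satisfied cell by cell. J[i] is the edge between nodes i and i+1."""
    dV = dx * area
    i_old, i_new = int(x_old // dx), int(x_new // dx)
    if i_old == i_new:                                   # no full-grid node crossed
        J[i_old] += q * (x_new - x_old) / (dV * dt)
    else:                                                # split at the crossed node x_i
        x_i = max(i_old, i_new) * dx
        J[i_old] += q * (x_i - x_old) / (dV * dt)        # segment inside the old cell
        J[i_new] += q * (x_new - x_i) / (dV * dt)        # segment inside the new cell
    return J
```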
5.4.2. Marder correction


The Marder algorithm [43] employs a correction calculation to approximate local charge conservation. Thus, rather than being constrained to
use a current allocation scheme that satisfies
charge conservation, we can choose a less noisy
scheme, and then correct the charge conservation
errors explicitly by adding currents that reduce
the conservation errors accumulated from previ-

ous time steps. A scalar field representing the error in Gauss's law is computed immediately after every electromagnetic field calculation. On the subsequent time step, an error diffusion term is added to the current density, or

$$\rho' = \rho - \varepsilon\,\nabla\cdot\mathbf{E}, \qquad \mathbf{J} \rightarrow \mathbf{J} + \nabla\rho'. \tag{5.8}$$
Essentially, this diffuses away the charge-density


error over time, thus approximately satisfying
Gauss's law. The current allocation scheme which
we use with this algorithm preserves the longitudinal components, but shares them linearly based
upon the midpoint of the differential motion.
This allocation algorithm does not require paths
or randomness.
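A one-dimensional sketch of this correction, with fields on edges and charge density on nodes; the diffusion coefficient `d` is an illustrative knob rather than MAGIC's internal choice:

```python
import numpy as np

def marder_correct_1d(J, E, rho, dx, eps0=8.854e-12, d=1.0):
    """Marder-style correction (cf. Eq. 5.8): form the Gauss's-law error on the
    nodes and add its gradient to the edge currents so the error diffuses away
    over subsequent steps. rho has one more entry than E and J (node-centered)."""
    err = np.zeros_like(rho)
    err[1:-1] = rho[1:-1] - eps0 * (E[1:] - E[:-1]) / dx   # rho' = rho - eps * div E
    J += d * (err[1:] - err[:-1]) / dx                     # J <- J + grad(rho')
    return J
```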
5.4.3. Cooling algorithm
The cooling algorithm was developed for the express purpose of reducing longitudinal electric field noise in thermal plasma simulations in which the Debye length is unresolved. The algorithm combines spatial and temporal filters. The time filter smooths temporal fluctuations in the charge density which would otherwise pump energy into increasing electric field fluctuations. This algorithm has demonstrated an ability to stabilize self-heating in some simulations and may also find applications where generic cooling of a plasma is required.
The algorithm starts with the current allocations of either of the previous two algorithms and then further adjusts them. The charge and current allocations are spatially filtered using
(5.9)
where a is chosen automatically to reduce the short wavelength spectral content. This effectively broadens the footprint of each particle to 16 grid points, while the shape implicitly reduces the undesirable spectral content. The approach implicitly handles complex allocation of charge near boundaries. In principle, the filter can be applied more than once per time step for even broader footprints; however, so far this has not been done.
The filtering process creates short scale-length
charge-conservation errors which are corrected
using the error-correction scheme described previously. However, it uses the time-averaged error
in Gauss's law, rather than the instantaneous
value. (The time-averaged error is obtained by
applying a Kalman (RC) filter to the instantaneous error.)
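The time-averaged error can be maintained with a one-line exponential (RC) filter of the kind referred to above; the smoothing parameter here is illustrative:

```python
def rc_filter(previous, sample, alpha=0.1):
    """One step of an exponential (RC) moving average: blend the new sample
    into the running estimate with smoothing parameter 0 < alpha <= 1."""
    return previous + alpha * (sample - previous)

# Usage: error_avg = rc_filter(error_avg, instantaneous_error)
```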

Fig. 13. Geometry of the ASTERIX diode. This simulation illustrates various geometrical features (symmetries, incident waves,
etc.) and material properties (conductors, dielectrics, etc.).


6. Geometry and materials


6.1. Boundaries

A boundary is defined as an interface between


adjacent spatial regions. Boundary conditions
must be imposed to limit the simulation region,
and their accuracy can have a profound impact
on fidelity.
6.1.1. Symmetries
Inherent mirror and periodic symmetries in a
problem can be exploited with a precise treatment of fields and particles, ensuring results
identical with those which would be obtained by
actually extending the space. Mirror symmetry
provides Neumann boundary constraints. Periodic
symmetry boundaries are used to study a section
of a perfectly repetitive system. They also provide
closure in polar coordinates, as illustrated by the
dashed line in the coaxial magnetron simulation
of Fig. 9.
6.1.2. Incident and outgoing waves
Both TM and TE waves can be introduced and
arbitrary outgoing waves allowed to escape
through any conformal boundary [5,44]. Formally,
fields at a boundary can be decomposed into
incident and scattered waves which propagate in
opposite directions with known phase velocities.
The incident wave is completely specified by a
temporal function, f(t), and by a transverse spatial function, e.g., E(x). The outgoing component
is computed by interpolation of adjacent fields in
retarded time after subtracting the specified incident wave. The phase velocity must be specified
correctly to achieve good accuracy, and any mismatch will cause some reflection. Thus, this is a
narrow-band boundary condition which does not
work well with multiple or unknown phase velocities.
Fig. 13 illustrates a simulation model of the
ASTERIX diode [45], in which a TEM voltage
pulse is introduced at a boundary immersed in
dielectric. In this case, the incident wave shape
and the phase velocity are well known, and excellent accuracy results.

6.1.3. Free-space boundary


By contrast, the so-called "free-space" boundary condition is a broad-band method used to absorb outgoing waves [44]. It is employed over a spatial region (as opposed to a boundary) by applying an artificial, spatially varying conductivity to both electric and magnetic fields. In principle, symmetry of application eliminates the phase shift and reflection associated with a physical conductivity. Normally, the conductivity function is applied only to transverse field components
and, for best results, should extend at least one
wavelength and increase in strength with propagation distance. However, our formalism provides
separate controls (and independent conductivity
functions) for all six field components to allow
neutralization of the space charge from particles
which may become embedded in the conducting
region.
The free-space boundary condition is not sensitive to phase velocity; thus, it is a good choice
for multi-mode simulations. Its primary shortcoming is the difficulty of launching particles and
incident waves through the conducting region.
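A sketch of the idea, assuming a simple polynomial conductivity ramp and treating the conductivities as per-step exponential damping factors; the profile shape, strength, and extent are assumptions rather than MAGIC defaults:

```python
import numpy as np

def absorber_profile(n_cells, sigma_max=2.0e-2, power=3):
    """Illustrative conductivity ramp for a free-space absorbing region:
    zero at the inner edge, rising smoothly toward the outer boundary."""
    x = np.linspace(0.0, 1.0, n_cells)
    return sigma_max * x ** power

def damp_fields(E, B, sigma_E, sigma_B, dt):
    """Apply the 'electric' and 'magnetic' conductivities as exponential decay
    factors each step; applying both symmetrically suppresses reflection."""
    E *= np.exp(-sigma_E * dt)
    B *= np.exp(-sigma_B * dt)
    return E, B
```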
6.1.4. Lindman boundary
The Lindman boundary condition [46,47] can
also absorb waves of unknown phase velocity and
multiple waves of different phase velocities simultaneously. In this method, the standard wave
equation is replaced with a unidirectional wave
equation based on the positive-direction dispersion relation,

$$k_x = \frac{\omega}{c}\,\sqrt{1 - \frac{c^2 k_y^2}{\omega^2}}\,. \tag{6.1}$$
The innovation of the method is in the way
that it generates a differential operator which has
the dispersion of the square-root term. This is
accomplished with an Nth order Pade approximation, decomposed into its partial fraction components. In operator form, each partial fraction is
represented as a separate field variable and satisfies its own wave equation with a source term
provided by the actual boundary field. The sum
involving the individual Pade terms provides the
correction to the field at the boundary. For nor-


mal incidence, the approximation is always excellent. For highly oblique angles of incidence, a
large number of terms is needed to obtain satisfactory results.


6.1.5. Transmission line extensions


Another way to extend the simulation space is
to match one of the boundaries to a transmission
line [48]. The assumption is that the spatial region represented by the line is devoid of plasma,
so that the impedance properties can be specified
precisely. The conventional matching circuit is
linear, but we have also investigated parallel
matches to provide an RF extraction mechanism
in two dimensions. It is also possible to match
transmission lines to each other to form linear,
parallel, and series circuits.
One common use of the transmission line is to
join a static load to a dynamic simulation [49]. A
recent novel application involves phase-locking of
relativistic magnetrons. In this simulation, two
complete (0-2π) magnetrons are modeled simultaneously by expanding the azimuthal grid to 4π
and employing two sets of periodic boundaries.
The two magnetrons are isolated electrically, save
for the transmission line connecting their respective ports.

6.2.1. Perfect conductors, resistivity, and dielectrics


The simplest material is the perfect conductor. All electric fields within the solid vanish (including tangential fields at the surface). Within a cell, conducting surfaces can be either conformal (aligned with the cell edge) or diagonal (in a straight line between opposing corners). The diagonal representation [42] avoids "stair-step" geometry, which can affect phase velocity adversely.
Fig. 14 illustrates use of this capability in modeling the hemispherical tip of a pulsed-power diode.
Materials possessing resistivity (finite conductivity) may also be modeled. The conductivity may be spatially varying and either static or dynamic (i.e., an explicit function of space and time). The
model induces currents in the material proportional to the electric fields, and the currents alter
the fields through Maxwell's equations.
Dielectric materials can be anisotropic and
vary over space. The displacement field is discontinuous at the interface between dissimilar dielectrics. At such boundaries, the mean displacement field is chosen to be the cell-averaged field.
Particles which encounter a dielectric are destroyed (see Section 5.2), and their charge is
deposited on the dielectric surface. The dielectric
model has been used to investigate high-power
microwave breakdown of dielectric interfaces [50]
and edge diffraction [51]. Fig. 13 illustrates two
dielectric applications in the ASTERIX diode
simulation. The upper section of the diode is

6.2. Material properties


A material is typically defined in terms of a
bulk effect applied over a spatial region. However, material properties are used to represent a
wide variety of physical effects. Solids can be
applied to arbitrarily shaped regions of space, and the defining surfaces will be mapped to the spatial grid. At the other extreme, geometrical features smaller than a cell can be represented.

Fig. 14. Hemispherical tip of the ASTERIX diode. The spatial grid required for the spherical surface is generated automatically.


filled with oil having a relative dielectric constant


of ~.5, while the middle section contains grading
rings (perfect conductors) supported by Lucite
(dielectric constant of 2.5), which confines the oil
and provides a vacuum barrier.


6.2.2. Foils

The foil model allows definition of a thin,


conducting material which scatters particles which
pass through it. Foils are restricted to being conformal in the spatial grid. Transverse electric
fields on foil surfaces vanish; any particles which
penetrate may be scattered or possibly even destroyed. The particle kinematics include scattering [52] and energy degradation [53] computed
using the incident energy and momentum and the
foil thickness and material properties (mass,
charge number, and density).
6.2.3. Air chemistry

Air chemistry is considered to be another type


of material property. A gas with specified constituents and pressure is assumed to fill a specified spatial region. The model uses three fluids

(electrons, negative ions, and positive ions) and includes ionization, avalanching, and attachment. Primaries (high-energy particles) are usually represented using macro particles, in which case the local ionization rate is proportional to the magnitude of the absolute current density, |J|. The number densities for all ionized species are calculated using either centered-difference or exponential-difference, the latter being an extraordinarily stable but expensive algorithm. The avalanche coefficient, electron-attachment coefficient, electron mobility, and other air-chemistry parameters are functions of air pressure and the local electric field [54]. This algorithm is applicable in the pressure regime, 0.05 atm < P < 1 atm. Note that the electromagnetic field algorithm automatically accounts for the fluid conductivity when the air-chemistry module is activated.
Fig. 15 illustrates the use of air chemistry in the Aurora drift tube. The purpose of the drift tube is to sharpen the 45 nsec rise time of the 7 MeV, hollow, electron beam as it exits the diode. Under vacuum, the beam exhibits Coulomb repulsion and cannot propagate. At one atmosphere, the successful transport is marked by cross-overs due to over-focusing and thermalization.
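A generic sketch of the exponential-difference update mentioned above, for a single continuity equation of the form dn/dt = source − loss·n; the actual avalanche, attachment, and mobility coefficients come from the air-chemistry tables [54] and are not reproduced here:

```python
import numpy as np

def exponential_difference(n, source, loss_rate, dt):
    """Exponential-difference update for dn/dt = source - loss_rate * n.
    Exact when the coefficients are locally constant, hence extremely stable,
    at the cost of an exponential per cell and species (assumes loss_rate > 0)."""
    decay = np.exp(-loss_rate * dt)
    return n * decay + source * (1.0 - decay) / loss_rate
```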

Fig. 15. Air-chemistry effects in Aurora drift tube. (a) Beam propagation in vacuum. (b) Beam propagation in one atm air. Air
allows beam propagation and reduces rise time.

6.2.4. Semiconductors

A drift-diffusion model is available for silicon;


it has been used to model avalanche breakdown
in a diode [55]. The model includes a two-fluid
representation of electron (n) and hole carriers
(p) with arbitrary doping profiles for donors and
acceptors. Each of the carriers satisfies continuity
equations of the form,

$$e\,\partial_t n = \nabla\cdot\mathbf{J}_n + \alpha\,|\mathbf{J}_n| + \beta\,|\mathbf{J}_p| - eR\,(np - n_i^2), \tag{6.2}$$

where current and carrier densities are related by


drift (mobility) and diffusion mechanisms, e.g.,
$$\mathbf{J}_n = e\,\mu_n n\,\mathbf{E} + e\,D_n \nabla n. \tag{6.3}$$
Carrier mobilities include both thermal and electric-field dependence. The electric field is derived
from alternating-direction, implicit solution of the
Poisson equation (see Section 4.3), including
donor, acceptor, and carrier densities.
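A one-dimensional sketch of the drift-diffusion relation of Eq. (6.3) on a staggered grid (densities on nodes, fields and currents on edges); MAGIC's actual discretization and mobility models may differ:

```python
import numpy as np

def electron_current_1d(n, E_edge, mu_n, D_n, dx, e=1.602e-19):
    """Drift-diffusion electron current, J_n = e*mu_n*n*E + e*D_n*dn/dx,
    evaluated on the cell edges between the node-centered densities."""
    n_edge = 0.5 * (n[:-1] + n[1:])          # carrier density averaged to the edge
    grad_n = (n[1:] - n[:-1]) / dx           # diffusion term
    return e * mu_n * n_edge * E_edge + e * D_n * grad_n
```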
Fig. 16 presents diode currents and current-density contours from a simulation designed to
test current-channeling effects in a reversed-bias,
p-n junction driven by a relaxation oscillator
(R, C, V) circuit.
6.2.5. Polarizer
It is possible to model certain three-dimensional features in a two-dimensional simulation.


The polarizer model [56] couples TE and TM
fields in a particular way to represent an infinite
array of wires at some specified angle to an axis
of symmetry. Applications include wave-splitters,
antenna shields, waveguide screens, and helices.
Fig. 17 illustrates use of the polarizer model in
a simulation of the emission gated amplifier [57].
This device uses a helical structure to produce a low-velocity wave which travels in synchronism with and extracts energy from the electron
bunches. Simulations predict that a highly-tapered
helix will give significant efficiency improvement;
an experiment to test this prediction is underway.


Fig. 16. Electron current and current densities in a silicon


diode. (a) Electron current densities. (b) Electron current.
The current constriction may be indicative of negative differential resistance and second breakdown.

6.2.6. Sub-grid models

The basic idea behind sub-grid models is the


representation of physical detail much finer than
can be resolved by the ordinary spatial grid. This
approach is usually implemented by adjusting the
gross cell properties so that the surrounding fields
will reflect the desired physical behavior. The
polarizer model described above may be regarded
as an example of a sub-grid model.
Some models, such as the thin dielectric coating [58], are quite complex, while others involve
relatively simple ideas. A "shim" model, which
distorts spatial cells in the vicinity of conductors,
has recently been used to model mode competition in a finely tapered waveguide [59]. In principle, this model can be extended to arbitrarily
shaped structures, while retaining the considerable virtues of an orthogonal grid.




Sub-grid models are particularly advantageous


in three dimensions as a means of avoiding the
resolution of fine structure. Some of the available
models include the capacitive gap, the dipole
drive, the membrane damper, and the strut [60].
The strut is perhaps the classic sub-grid example;
it models propagation along a conducting element with an arbitrary cross section which is
small compared to the spatial cell. (The dynamic
fields are coupled to and solved simultaneously
with auxiliary transmission line equations which impart the correct inductive and capacitive properties.)

7. Output

The utility of electromagnetic PIC lies in its


ability to provide physical insights and accurate
design predictions. To this end, MAGIC offers
general-purpose output capability which can be
configured to reveal virtually any research abstraction or design performance measurement.
Output serves three distinct functions: diagnostics, in-line analysis, and post processing. Diagnostics verify and validate the simulation model,
in-line analysis provides immediate parametric
results, and post processing allows extended, interactive data analysis.
7.1. Diagnostics
Diagnostics verify and validate. Verification
ensures that the simulation model represents the
structure as the user intended. It is achieved
primarily with displays of the geometry and grid;
color is used to represent different types of materials and boundary conditions. Fig. 13 illustrates
the use of color to represent geometry and the
locations of boundaries.
Validation ensures self-consistency of internal
algorithms. It begins with general checking of


input commands and continues with implementation of specific diagnostic requests. The code
permits wide latitude in the selection of algorithms, and there are associated with these algorithms a correspondingly large assortment of internal diagnostics which may be exercised upon
demand. A simple example is the stability diagnostic, which interprets field algorithm specifications and parameters and searches the spatial
grid to determine compliance with the applicable
Courant criterion.

7.2. In-line analysis


A complete discussion of the in-line analysis
possibilities is beyond the scope of this article,
but some common measurements include beam
emittance, RF gain, power conversion efficiency,
s-matrix parameters, current, power, rise time,
quality factor, R/Q-factor, capacitance, inductance, impedance, phase velocity, dispersion, and
frequency [61,62,63]. General analysis features
virtually eliminate the necessity for device-specific
tools. Measurements may be invoked with a single command or, in some cases, a combination of
commands.
Input controls are provided for specifying, rendering, and interpreting simulation output. For
example, field quantities may be displayed versus
time or space in color or black and white (gray
scale) using contour plots, perspective plots, vector plots, and x-y plots. Particle phase-space
output provides arbitrary combinations of particle
variables and even arbitrarily transformed variables. For example, momentum may be converted
to energy-per-particle in arbitrary units and plotted versus any other phase-space variable. Data
management controls include the option of selectively recording data for any plot, rather than
displaying it. Thus, data generated on one platform can be displayed on another.

Fig. 17. Polarizer model of the emission gated amplifier. Recent simulation results predict that use of a highly tapered helix could
yield fundamental mode efficiencies as great as 50%.
Fig. 18. Beam emittance. Color display reveals phase-space rotation of electron bunch exiting an RF gun.


As an example of the flexibility inherent in the


output commands, we consider a measurement of
time-averaged gain versus position, a potentially
valuable diagnostic for evaluating RF amplifier
performance. The specific expression required is

$$\mathrm{Gain}(z,t) = 10\,\log_{10}\!\left[\frac{2\pi}{t_{\mathrm{ave}}}\int_{t-t_{\mathrm{ave}}}^{t}\!dt\int r\,dr\;\hat{z}\cdot\bigl(\mathbf{E}(r,z,t)\times\mathbf{B}(r,z,t)\bigr)\right]. \tag{7.1}$$

The essential point here is that gain is not a


feature of the code. However, the complex integral of Eq. (7.1) may be obtained simply by using
a few general-purpose commands. The top portion of Fig. 17 illustrates results from such an
analysis.
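A post-processing sketch of such a measurement, assuming the cross-section-integrated axial Poynting flux has been recorded as an array P[iz, it]; this is the kind of ordinary array arithmetic Eq. (7.1) requires, not a MAGIC command:

```python
import numpy as np

def gain_db(P, dt, t_ave, z_ref=0):
    """Time-averaged gain versus axial position from a recorded power history
    P[iz, it]: 10*log10 of the trailing average at each z, referenced to the
    plane z_ref (assumes saturated, forward power flow)."""
    n_ave = max(1, int(round(t_ave / dt)))
    avg = P[:, -n_ave:].mean(axis=1)          # trailing time average at each z
    return 10.0 * np.log10(avg / avg[z_ref])
```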
7.3. Post analysis

The possibilities for post analysis run the gamut


from producing simple overlays, to extensive
analysis, to video animation. Fig. 18 offers a simple example involving the emittance of an electron bunch exiting an RF gun [64]. The emittance
contribution of each particle is shown as a function of position, with electrons from the front,
middle, and back of the bunch colored red, yellow, and blue, respectively. A phase-space rotation correlated with position is exposed by the
plot, revealing the dominant source of emittance.
Multi-stage collector design offers a more sophisticated example of analysis. In some devices,
the simulation transient is much longer than an
RF period. However, once saturation is reached,
data from a single cycle can be recorded and
post-processed. Fig. 19 illustrates an I-V curve
produced using this technique to record particle
data at an amplifier outlet. Additional post processing was done to predict the optimum collector voltages as a function of the number of stages.
In another example, the klystrode collector shown
in Fig. 11 was simulated to saturation and one
RF cycle of data was recorded - in this case, the
particles striking the collector walls. Fig. 20 shows
the heating rate as a function of perimeter distance along the internal wall of the collector.

Fig. 19. Post-analysis for collector design. The I-V curve
associated with the spent-electron distribution can be used to
design stages and voltages.

Animation of simulation data has proven to be


highly effective in revealing the underlying physics
of a device. Videos are created by reading simulation data for a time step, reducing it to a set of

Fig. 20. Klystrode collector heating. These results revealed a
severe heating problem at the tip of the collector.


pictures on a screen, recording the screen as one


or several frames of a movie, and then repeating
this process for subsequent time steps. The layout
of the screen for each frame is configurable
through input and may be altered during recording. Fig. 17 illustrates a frame from an emissiongated amplifier video. At the bottom, a highly
tapered helix converts energy from electron
bunches which enter at the left into an electromagnetic wave which exits at the right [56]. The
top shows cycle-averaged conversion efficiency vs.
distance, and the middle shows the instantaneous, longitudinal phase space. Visual inspection of the phase-space overlap was the key to
designing the helix taper for maximum fundamental mode efficiency.

8. User interface
A measure of user interface quality is the
productivity it provides for key modeling activities, such as modifying input to reflect design
changes, modifying the finite-difference grid, performing data exchange between applications,
post-processing the results, and documenting the
design activity. MAGIC provides a command language to facilitate these activities.
8.1. MAGIC command language

The MAGIC command language (MCL) provides flexible input and is also used to connect
MAGIC to graphical user interfaces and to other
engineering design tools. MCL features a FORTRAN-style syntax, including variable substitution and features which allow the execution of
complex mathematical and logical manipulations
without re-compiling.
Other MCL features include integer, real, and
character variables, multiple-dimensional arrays,
functions, vector functions, arrays of functions,
do-loops, if-then-else-endif constructs, macro
(subroutine) calls, and the ability to invoke operating system commands. The built-in mathematical functions include all basic FORTRAN functions, Bessel and Hankel functions, their derivatives and roots, and a Gaussian random-number


generator. The ability to enter arbitrary functions


allows algorithm and simulation parameters to be
functions of internal variables such as position
and time. Consequently, special material features
or physical effects can be invoked without software modifications.
MCL also supports the development of templates, or input files which translate key device
parameters into geometry and algorithm specifications. When modifications to the design are
desired, it is necessary to alter only the list of key
parameters, rather than the more complicated
template. The template approach is useful for
quickly scanning a design space, e.g., changing
design parameters and recomputing the response
to achieve the desired performance. It also
records explicitly the definition of the key relationships between the elements of the design.
Other features facilitate connection to graphical interfaces and wrappers. For example, a conversion command allows data to be specified in
any mixture of units, e.g., inches, microns, mm,
etc. The command language automatically recognizes arrays of input values and creates the appropriate array size, thereby allowing transparent
interpretation of data from spreadsheet GUI
wrappers. Other commands simplify writing input
data files to external tools based on a template
specification of input requirements. MCL is available as a separate shell program to satisfy such
requirements.
8.2. Data management and standards

The continuing advances in CPU performance


inevitably lead to more data and greater emphasis upon data storage and access tools. At the
same time, portability, flexibility, and cost issues
across a wide range of user environments discourage the use of sophisticated data management
systems. Instead, data management in MAGIC is
based on simple file manipulations and formats
common to virtually all systems.
Each simulation generates a data file-set which
can be accessed later with the operating system
file manager, with the POSTER code, or with a
site-specific data manager. The file naming convention can be configured to make use of direc-


tory trees or other identifying attributes, such as design or run numbers. All files are sequential for portability. An ASCII format provides portability between platforms, while a binary format provides compactness. Data is recorded in three file categories (field, particle, and other) using a self-identifying format.
These self-identifying records constitute a rigorously defined data standard for electromagnetic fields and charged particles which has recently been adopted for MMACE. The key, high-level software routines for reading and writing in this format [65] greatly facilitate data ex-


change between modeling tools. Translators are available to connect MAGIC with the POISSON Group codes, electron gun codes such as UGUN and EGUN [66], and the particle accelerator code, PARMELA [67]. Progress is also being made to link MAGIC with computer-aided drafting (CAD) tools through a translation utility, with the goal of providing conversion of geometry information. A recently created facility translates a data subset, including lines, arcs, and conic-arcs of IGES (initial graphical exchange standard) data [68], into MAGIC-readable form. Other object types are in progress.


Fig. 21. A parametric GUI. MS-Excel automatically feeds magnet calculations into MAGIC simulations.


8.3. Graphical user interfaces (GUIs)

MAGIC has been connected to both parametric and geometric GUIs. In a parametric GUI, the parameters of interest are displayed and modified with point and click actions, slide bars, radio dials, etc., as supported by the Motif widget tool set on workstations or advanced spreadsheet features on a PC. Parametric GUIs provide immediate, direct information on the relevant design parameters, but they are specific to a class of device. In a geometric GUI, geometry is displayed, and point and click techniques are supplied for editing pictorially. Geometric GUIs are more general and therefore more useful when the set of parameters of interest is still being determined.
Excel has been used to create a parametric
GUI under MS-Windows. A spreadsheet specifies all the options and parameters of interest and
a single button invokes a simulation sequence.
The spreadsheet data is passed automatically via
the clipboard to MAGIC, which parses the data
with MCL and initiates a simulation. Again using
MCL, POSTER automatically reads and processes the simulation results to create a summary
file which is displayed in the spreadsheet. Fig. 21
illustrates a parametric GUI which controls a
POISSON group code calculation and translates
results to the MAGIC data format. The application is a particular type of magnet configuration
called a paired permanent magnet (PPM).
MAGIC has been connected simultaneously to
a CAD-driven geometry GUI, a motif-based
parametric GUI, and an icon- and menu-driven
control panel which coordinated the full specification of a device, including magnet, gun, electromagnetic, and thermo-mechanical simulation.
Geometry and some material specifications originate from the CAD drawing, which saves the
drawing in IGES format. The parametric GUI
specifies algorithms, the type of calculation, and
the use of optional input from gun and magnet
tools. Selection from the control panel menu invokes a batch process which (a) converts the
IGES data into MAGIC readable form, (b) runs
MAGIC from .a design template using the converted IGES data and the parametric data, and


(c) runs POSTER to create displays invoked by


other menu selections. Error and message handling capability passes information to a control
panel utility which displays messages and options
to the user interactively.

9. Conclusion

This paper has presented an overview of


MAGIC, giving emphasis to its configurability as
manifested in the selection of field and particle
algorithms, geometry, materials, physical processes, and diagnostics discussed and by the many
simulation examples presented. Over the past 15
years, MAGIC has evolved into a general-purpose, research and design tool. Much of its popularity can be attributed to the user-configurable
approach it brings to electromagnetic PIC simulation. This approach provides access to all of the
useful algorithms, techniques, and diagnostic features which have been developed over its history.
This concept continues to guide development.
The implementation of new models and algorithms is facilitated by a robust mathematical
foundation and software architecture. By adhering to basic mathematical fundamentals, new algorithms intrinsically tend to be compatible. The
architectural framework in which new models are
inserted will support the addition, rather than
replacement, of capabilities. The MAGIC Command Language interface supports a spectrum of
user capabilities, supplemented with spreadsheet
GUis and a large selection of templates. An
expert system and graphical GUI are being developed. Progress is being made in integration of
MAGIC with CAD applications as well as with
other simulation tools. Indeed, MAGIC is a fundamental component of integrated environments
such as MMACE.
The MAGIC User's Group provides the impetus and unifying focus for the continued growth
of MAGIC. The Group represents an increasingly diverse base of use, experience, and driving
influences. Over the past five years, at least 170
individuals at 44 different sites have used and
contributed to the evolution of the code. This
activity expanded the envelope of applications,


while simultaneously placing even greater demands on robustness, accuracy, and productivity.
Thus, while retaining its role as a research tool,
the MAGIC code is now used for design work in
fields such as microwave source development. We
expect this trend and challenge to continue.
Acknowledgements

Over the years, support for the development of


MAGIC has been provided by a number of organizations, including Sandia National Laboratories, Phillips Laboratory, Army Research Laboratory, Defense Nuclear Agency, Naval Research
Laboratory, and Air Force Office of Scientific
Research. The MAGIC User's Group is supported by Dr. Robert Barker of AFOSR.
References
[1] C.A.J. Fletcher, Computational Galerkin Methods (Springer, New York, 1984).
[2] C.K. Birdsall and A.B. Langdon, Plasma Physics via Computer Simulation (McGraw Hill, New York, 1985).
[3] B. Goplen, R.E. Clark and S.J. Flint, Geometrical Effects in Magnetically Insulated Power Transmission Lines, Mission Research Corporation Report, MRC/WDC-R-001 (April 1979).
[4] B. Goplen, R.E. Clark, B. Goldstein and R. Stettner, Three-Dimensional SGEMP Simulation of an Idealized FLTSATCOM in SXTF, in: 1980 Annual Conference on Nuclear and Space Radiation Effects, 15-18 July 1980.
[5] B. Goplen, R.S. Coats and J.R. Freeman, Three-Dimensional Simulation of the PROTO-II Convolution, in: 4th IEEE Pulsed Power Conference, 6-8 June 1983.
[6] Annual Meeting of the MAGIC User's Group, held in conjunction with the 18th IEEE Intl. Conf. on Plasma Science, Williamsburg, VA, 3-5 June 1991.
[7] B. Goplen, L. Ludeking, D. Smithe and G. Warren, MAGIC User's Manual, Mission Research Corporation Report, MRC/WDC-R-310 (April 1993).
[8] B. Goplen, L. Ludeking, J. McDonald, G. Warren and R. Worl, MAGIC Reference Manual - Algorithms, Mission Research Corporation Report, MRC/WDC-R-201 (October 1989).
[9] B. Goplen, L. Ludeking, D. Smithe and G. Warren, SOS User's Manual, Mission Research Corporation Report, MRC/WDC-R-283 (October 1991).
[10] B. Goplen, J. McDonald and G. Warren, SOS Reference Manual - Version October 1988, Mission Research Corporation Report, MRC/WDC-R-190 (March 1989).
[11] L. Ludeking, G. Warren and B. Goplen, POSTER User's Manual, Mission Research Corporation Report, MRC/WDC-R-245 (August 1993).
[12] MUG Shots - MAGIC User's Group Newsletter, Vol. 1, No. 1 (February 1992).
[13] G. Warren, L. Ludeking and D. Smithe, User-Friendly Wrappers for Tools Used in Millimeter and Microwave Device Design, Bull. Am. Phys. Soc. 38 (1993) 2004.
[14] L. Ludeking, B. Goplen, D. Smithe and G. Warren, Mainframe Computing on a PC, Bull. Am. Phys. Soc. 38 (1993) 1894.
[15] L. Ludeking, B. Goplen and G. Warren, PC MAGIC, presented at: 1992 APS Conference, Seattle, WA, 16-20 November 1992.
[16] L. Ludeking, PC/POSTER, presented at: 1993 APS Division of Plasma Physics Conference, St. Louis, MO, 1-5 November 1993.
[17] K.S. Yee, IEEE Trans. Antennas Propagation 14 (1966) 302-307.
[18] S.B. Swanekamp, J.P. Holloway, T. Kammash and R. Gilgenbach, The Theory and Simulation of Relativistic Electron Beam Transport in the Ion-Focused Regime, Phys. Fluids B 4 (1992) 1332-1348.
[19] M.T. Menzel and H.K. Stokes, User's Guide for the Poisson/Superfish Group of Codes, Los Alamos National Laboratory, Los Alamos Accelerator Code Group, Report LA-UR-87-115, Los Alamos, NM (January 1987).
[20] J.L. Warren, M.T. Menzel, G. Boicourt, H.K. Stokes and R.K. Cooper, Reference Manual for the Poisson/Superfish Group of Codes, Los Alamos National Laboratory, Los Alamos Accelerator Code Group, Report LA-UR-87-126, Los Alamos, NM (January 1987).
[21] D. Smithe, Electrostatic Well Formation in the HEPS Device, Mission Research Corporation Report, MRC/WDC-R-240 (November 1990).
[22] Jin Choi, Private communication (1994).
[23] J.M. Grossman, S.B. Swanekamp and P.F. Ottinger, Modeling of Dynamic Bipolar Plasma Sheaths, Phys. Fluids B 4 (1) (1992) 44-55.
[24] J.P. Boris, Relativistic Plasma Simulation - Optimization of a Hybrid Code, in: Proc. Fourth Conference on Numerical Simulation of Plasmas, Naval Research Laboratory (1970) p. 3.
[25] B.B. Godfrey and B. Goplen, Practical Evaluation of Time-Biased Electromagnetic Field Algorithms for Plasma Simulations, in: Twenty-Second Annual Meeting of APS, Division of Plasma Physics, 10-14 November 1980.
[26] K. Nguyen, G. Warren, L. Ludeking and B. Goplen, Analysis of the 425-MHz Klystrode, IEEE Trans. Electron Devices 38 (1991) 2212-2220.
[27] K. Nguyen, D. Smithe and J. Pasour, The Plasma Wakefield Klystron, in: 6th Natl. HPM Tech. Conf., San Antonio, TX, August 1992.
[28] K. Nguyen, D. Smithe and J. Pasour, The Plasma Wakefield Klystron, A High-Power Microwave Generator, Mission Research Corporation Report, MRC/WDC-R-266 (November 1991).
[29] J.D. Miller, R.F. Schneider, D.J. Weidman, H.S. Uhm and K.T. Nguyen, Observation of Plasma Wake-Field Effects During High-Current Relativistic Electron-Beam Transport, Phys. Rev. Lett. 67 (13) (1991) 1747-1750.
[30] J.D. Miller, R.F. Schneider, D.J. Weidman and K.T. Nguyen, Plasma Wake-Field Effects on High-Current Relativistic Electron Beam Transport in the Ion-Focused Regime, Phys. Fluids B 4 (12) (1992) 4121-4130.
[31] J.D. Jackson, Classical Electrodynamics (Wiley, New York) p. 240.
[32] H.W. Chan, C. Chen and R.C. Davidson, Numerical Study of Relativistic Magnetrons, J. Appl. Phys. (December 1992).
[33] C. Chen, H.W. Chan and R.C. Davidson, Parametric Simulation Studies and Injection Phase Locking of Relativistic Magnetrons, SPIE - The Int. Soc. Opt. Eng. 1407 (1991) 105-112.
[34] G. Warren, IEEE Trans. Electron Devices 35 (1988) 2027-2033.
[35] I.S. Lehrman and G. Warren, Proc. 1990 Linear Accelerator Conference (1990) 51-53.
[36] S.E. Jones, High Power Accelerator and Magnetically Insulated Ion Diode for Ion Ring Studies, Thesis (August 1991).
[37] R.R. Burton, Numerical Simulation of Electron Extraction from a Plasma, Thesis (May 1991).
[38] G.A. Huttlin, M.S. Litz, M.S. Bushell, D.P. Davis, F.J. Agee, A. Bromborsky, N.R. Pereira and D.M. Weidenheimer, High-Power Microwave Experiments at Aurora, J. Radiation Effects 9 (2) (1991) 32-39.
[39] J.P. Calame, H.F. Gray and J.L. Shaw, Analysis and Design of Microwave Amplifiers Employing Field-Emitter Arrays, J. Appl. Phys. 73 (3) (1993) 1485-1504.
[40] H.H. Busta, J.E. Pogemiller, W. Chan and G. Warren, Experimental and Theoretical Determinations of Gate-to-Emitter Stray Capacitances of Field Emitters, J. Vac. Sci. Technol. B 11 (2) (1993) 445-448.
[41] S.B. Swanekamp, S.J. Stephanakis, J.M. Grossman, B.V. Weber, J.C. Kellog, P.F. Ottinger and G. Cooperstein, Charged Particle Flow in Plasma-Filled Pinched-Electron-Beam Diodes, J. Appl. Phys. 74 (4) (1993) 2274-2286.
[42] B. Goplen, R. Worl, J. McDonald and R. Clark, A Diagonal Emission Algorithm in MAGIC, Mission Research Corporation Report, MRC/WDC-R-152 (December 1987).
[43] B. Marder, A Method for Incorporating Gauss' Law into Electromagnetic PIC Codes, J. Comput. Phys. 68 (1) (1987).
[44] B. Goplen, Boundary Conditions for MAGIC, in: Twenty-Third Annual Meeting, APS Division of Plasma Physics, 12-16 October 1981.
[45] B. Goplen, L. Ludeking and K. Nguyen, ASTERIX Simulations with MAGIC, Mission Research Corporation Report, MRC/WDC-R-255 (June 1991).
[46] E.L. Lindman, Free-Space Boundary Conditions for the Time Dependent Wave Equations, J. Comput. Phys. 18 (1975) 66.
[47] P.A. Tirkas, C.A. Balanis and R.A. Renaut, Higher Order Absorbing Boundary Conditions for the Finite-Difference Time-Domain Method, IEEE Trans. Antennas Propag. 40 (1992) 1215.
[48] B. Goplen, J. Brandenburg and T. Fitzpatrick, Transmission Line Matching in MAGIC, Mission Research Corporation Report, MRC/WDC-102 (September 1985).
[49] S.B. Swanekamp, J.M. Grossman, P.F. Ottinger, R.J. Commisso and J.R. Goyer, Power Flow Between a Plasma-Opening Switch and a Load Separated by a High Inductance Magnetically Insulated Transmission Line, paper (unpublished).
[50] S.E. Calico, High-Power Microwave Breakdown of Dielectric Interfaces, Thesis (August 1991) (unpublished).
[51] J. Watkins, A Study of Edge Diffraction on a Grounded Dielectric Sheet by a Computational Method, presented at: Eighth Ann. Rev. of Progress in Appl. Computational Electromagnetics Conf., March 1992 (unpublished).
[52] E. Segre, Nuclei and Particles (Benjamin, New York, 1965) p. 40.
[53] Stopping Power for Electrons and Positrons, ICRU Report 37 (October 1984) Ch. 2.
[54] B. Goplen, An Air Chemistry Algorithm for SOS, Mission Research Corporation Report, MRC/WDC-R-043 (September 1983).
[55] B. Goplen, J. McDonald, A. Ward and J. Stellato, A Two-Dimensional Code for Avalanche Breakdown in Semiconductors, in: NASECODE IV Conference Proceedings, Dublin, Ireland (June 1985).
[56] D. Smithe and B. Goplen, 2D Particle-in-Cell Simulations of Emission Gated Traveling Wave Tubes, Bull. Am. Phys. Soc. 38 (1993) 2004.
[57] B. Goplen, D. Smithe, K. Nguyen, M. Kodis and N. Vanderplaats, MAGIC Simulations and Experimental Measurements from the Emission Gated Amplifier I&II Experiments, in: 1992 Int. Electron Devices Meeting Technical Digest (December 1992).
[58] W.A. Seidler, B. Goplen and W. Thomas, Investigation of Enhanced Electron Current Transport in a Dielectric-Lined Cavity, in: 1978 NEM Conference, Albuquerque, NM, June 1978.
[59] Hai Wu, Private communication (1994).
[60] B. Goplen, R.E. Clark and S.J. Flint, Sub-grid Models in the Cartesian SOS Code, Mission Research Corporation Report, MRC/WDC-R-002 (April 1979).
[61] F. Friedlander, A. Karp, B. Gaiser, J. Gaiser and B. Goplen, Transient Analysis of Beam Interaction with Antisymmetric Mode in Truncated Periodic Structure Using Three-Dimensional Computer Code SOS, IEEE Trans. Electron Devices 11 (1986) 1896.
[62] K. Xu and G. Bekefi, Experimental Study of Multiple Frequency Effects in a Free Electron Laser Amplifier, Phys. Fluids B 2 (3) (1990) 678-680.
[63] J.H. Booske, M.A. Basten, J. Joe, A.H. Kumbasar, J.E. Scharer, J. Anderson, B.D. McVey, R. True and G. Scheitrum, Nonrelativistic Sheet Electron Beams for Microwave Devices, Bull. Am. Phys. Soc. 38 (10) (1993).
[64] I.S. Lehrman, I.A. Birnbaum, S.Z. Fixler, R.L. Heuer, S. Siddiqi, E. Sheedy, I. Ben-Zvi, K. Batchelor, J.C. Gallardo, H.G. Kirk, T. Srinivasan-Rao and G.D. Warren, Nucl. Instrum. Methods in Phys. Res. A 318 (1992) 247-253.
[65] G. Warren and D. Smithe, Particle and Field Data Exchange Standard for MMACE, Mission Research Corporation Report, MRC/WDC-R-309 (April 1993).
[66] W.B. Herrmannsfeldt, EGUN - An Electron Optics and Gun Design Program, SLAC-Report-331 (October 1988).
[67] K.R. Crandall and L. Young, PARMELA, in: The Compendium of Computer Codes for Particle Accelerator Design and Analysis, eds. H. Deaven and K.C. Chan, Los Alamos National Laboratory Report LA-UR-90-1776 (May 1990) p. 137.
[68] The Initial Graphics Exchange Standard Version 5.0, Report NISTIR 4412, US Department of Commerce, National Institute of Standards and Technology, Center for Building Technology, Gaithersburg, MD.
