NEW DIRECTIONS IN
COMPUTATIONAL FLUID
DYNAMICS¹
Jay P. Boris
Annu. Rev. Fluid Mech. 1989.21:345-385. Downloaded from www.annualreviews.org
Access provided by Sun Yat-Sen University on 12/12/21. For personal use only.
1 The US Government has the right to retain a nonexclusive, royalty-free license in and to
any copyright covering this paper.
Today, CFD solutions are often quite comparable to, or even exceed,
the accuracy and resolution of laboratory experiments. Nevertheless, since
engineers are often dissatisfied with existing experimental results and the
scalability of small experiments to full-scale systems, their reluctance to
trust the computers completely is quite understandable. Furthermore, no
matter how accurate a simulation is, it is not, and cannot be, the real
world. Indeed, it is through the differences between simulations, which
reflect our current mathematical theory of fluid behavior, and the real
world as seen through experiments that we learn new things about fluid
dynamics. Finally, although simulations are cost effective relative to laboratory, field, and wind-tunnel tests, the costs and limitations encountered in developing and applying simulations to complex fluid-dynamic problems are still considerable.
There is still a long way to go. Solving for steady-state flow, really only
a mathematical idealization, over a realistic representation of a complete
automobile, airplane, or ship is still a formidable task. The turbulence
models used (for example, additional coupled Reynolds-averaged stress
equations) are at best limited approximations and often computationally
expensive. In the real world almost all of these flows are unsteady in the
important aspects of vorticity shedding and acoustic-vortex or wave-vortex
interactions. These unsteady phenomena, in turn, invalidate the procedures
used to derive most statistical turbulence phenomenologies.
Finite-difference meshes of a few hundred thousand grid points to resolve the flow variables are expensive when carried for even ten or twenty thousand time steps, so computations with a million cells or more are rarely
attempted. Finite-element algorithms are receiving increased attention for
CFD because of their more general geometric capabilities, but the cost of
inverting the global matrices that they introduce in time-dependent flows
is quite prohibitive. Spectral algorithms for CFD problems, based on global expansions, are argued to be more accurate than comparably resolved finite-difference and finite-volume algorithms, but they are correspondingly more difficult to use in realistic geometries and generally require more operations per mesh point. Full simulations of the Navier-Stokes equations are giving way to the application of Large-Eddy Simulations (LES) using monotone finite-difference or finite-volume algorithms to treat fluid convection. The models can have viscous terms, but they
do not resolve the small scales of turbulence in most regions, replacing
statistically based turbulence models with locally time-dependent subgrid
turbulence models.
Problems in CFD today are tackled primarily on supercomputers such
as the Cray, Cyber, NEC, Fujitsu, and Hitachi machines, which rely on
vector registers and pipelines to obtain their speed and which have relatively few processors. These computers are adequate for some steady three-dimensional problems and some unsteady two-dimensional problems, but
they are generally too small, too slow, and too expensive to bring solutions
of unsteady three-dimensional problems within the range of most fluid
dynamicists.
Current attempts at unsteady three-dimensional (3-D) problems take
dozens to hundreds of hours on the biggest computers available and even
then have generally only limited spatial resolution. Extended simulations
of 3-D incompressible turbulence in a periodic box and in partly periodic,
wall-bounded channels (relatively idealized configurations) have been conducted on meshes of up to 432 x 80 x 320 cells by, for example, Spalart (1986, 1988), Kim et al. (1987), Moser & Moin (1987), and Metcalfe et al. (1987) but at relatively great cost. Because only a few installations can
perform such calculations for more than a few thousand time steps, snapshots of these computations are being archived in national databases so that a significant component of the CFD community can have access to them.
While such idealized simulations have great value, there is an understandable requirement for calculations of more complex configurations with more complex physics carried to much longer times. To relate the simulated behavior of a reasonable fluid system to the new mathematics of chaos and nonlinear system dynamics requires at least a few thousand vortex sheddings to "flesh out" the system's computed return map.
Sections 2-5 of this review consider some of the new directions and
novel approaches currently on the horizon that attempt to circumvent
these limits. Four levels of new directions are identified and the strong
interactions between them discussed. New directions in representational
models of the fluid state are discussed in Section 2. These include cellular
automata, hybrid-cellular-automata, and molecular-dynamics approaches
as well as novel approaches to decimating the fluid-dynamic equations to
a few dynamically significant degrees of freedom. New approaches for
discretizing (representing) continuous functions within the context of the
standard continuum-fluid models such as the Euler and Navier-Stokes
equations are also considered.
Algorithmic extensions and new directions are discussed in Section 3.
One must recognize and adapt to the existence of limits on how accurately
a continuous function can be represented and convected through a discrete
grid. Imperfect but optimal solutions to the simple convection equation
provide an intrinsic upper bound on the accuracy that conventional
methods can obtain. Useful, quite general algorithms that approach this performance already exist. New algorithmic directions
include the extension of these near-optimal monotone methods to fully
adaptive and unstructured gridding to simulate general flow geometries,
the use of spectral elements to obtain better accuracy in moderately complex geometries, and the use of fully Lagrangian algorithms with the goal
of avoiding numerical diffusion errors.
Section 4 focuses on evolving hardware and the related subject of parallelism in computational fluid dynamics. There is a very close connection
between the representation and algorithms chosen for a computational
model and its implementation on a particular hardware system. Some
mathematical models of the fluid state are better suited to one type of
hardware than another, and novel numerical algorithms will increasingly
be aimed at "parallelizing" the model's implementation. Indeed, the model
choices are often dictated by the most efficient implementation of the
model on a particular computer.
Future directions in terms of problem complexity and novel applications of CFD to experimental fluid systems and observations are considered in Section 5. These are the connections between CFD and the real world. They involve more complex problems, the use of phenomenological models, the potential use of artificial-intelligence programs in organizing, writing, and running CFD models, and the increasing use of CFD technology in experiments and real-time systems. Since these four levels of new directions all interact, the present is a very exciting time for CFD.
2. NOVEL REPRESENTATIONS
Consider ρ_j, the cell value of ρ at x_j associated with some time t^n. This value can be interpreted in several ways.
Each discretization consists of both a set of values and a rule for interpreting these values.
Each of these interpretations has different properties, each approximates some situations better than others, and each has different associated solution algorithms. The number of possible CFD representations and corresponding algorithms is enormous. The correct choice depends on many properties of the problem being solved and the computer resources available. In all cases, however, there is only a finite number of discrete values with which to represent the solution. If only the changes in cell interface location from one cell to the next were specified, numbers of considerably lower precision could be used and many more of them could be stored in a fixed amount of memory. As an example, consider 4096 bits of memory segmented into 64-bit real numbers. Only 32 cells are possible if both the cell spacing and the density values are allowed to vary reasonably. Although each of the stored values is accurate to one part in 10^14, a general density profile is probably accurately represented to no more than a percent or so at any particular point.
Consider these same 4096 bits divided up into 256 line segments of 16 bits each. Each line segment consists of two 8-bit numbers giving the position and density displacements from one node to the next on the continuous curve defining the profile. Both the density values and the node displacements are quantized in this alternate "double-delta" representation with an accuracy of one part in about 10^4, much coarser than the cell values or locations in the 64-bit conventional floating-point representation. The overall specification of the profile, however, would be at least 10 times more accurate because there are eight times as many cells in the representation for the same number of bits. In addition, true discontinuities and even multivalued functions could be represented.
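To make the bit accounting concrete, the following sketch (mine, not from the review; the sine profile and the per-profile scaling rule are illustrative assumptions) applies both layouts to the same 4096 bits and reconstructs a profile from the quantized deltas:

```python
import numpy as np

TOTAL_BITS = 4096

# Conventional layout: pairs of 64-bit reals (position, density) per cell.
cells_64bit = TOTAL_BITS // (2 * 64)          # 32 cells

# "Double-delta" layout: 16-bit segments holding two 8-bit displacements.
nodes_8bit = TOTAL_BITS // (2 * 8)            # 256 nodes

# Encode an example smooth density profile as quantized 8-bit deltas.
x = np.linspace(0.0, 1.0, nodes_8bit)
rho = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)     # hypothetical profile

drho = np.diff(rho)                           # node-to-node displacements
scale = np.abs(drho).max() / 127.0            # careful scaling is essential
q = np.round(drho / scale).astype(np.int8)    # 8-bit quantized deltas

# Reconstruct the profile by accumulating the quantized displacements.
rho_rec = np.concatenate(([rho[0]], rho[0] + np.cumsum(q * scale)))
max_err = float(np.abs(rho_rec - rho).max())
```

For this smooth profile the pointwise reconstruction error stays well under a percent, while the node count is eight times that of the 64-bit layout, illustrating the trade noted above.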
While there are also obvious drawbacks to this representation, such as
the need for careful attention to scaling and the inefficiency of current
supercomputers for executing this representation, these are no more prohibitive than the implementation of cellular-automata models on conventional computers. Indeed, it would be truly surprising if either 1-bit numbers (as the cellular-automata supporters claim) or 64-bit numbers (as conventional CFD practitioners claim) turned out to be optimal.
Figure 1 Schematic of the 2-D hexagonal lattice-gas cellular-automata grid with typical
collisions illustrated.
Computers have finally become fast enough and large enough for molecular-dynamics models to extend all the way to the fluid regime in a practical manner.
The molecular-dynamics approaches to fluid dynamics usually take a
greatly simplified force law designed to minimize the computations in
exactly the same spirit but to a lesser degree than cellular-automata models.
The trade-off is computational simplicity in the cellular-automata models
versus physical veracity in the molecular-dynamics models. Greenspan (1985) has considered a number of the basic questions associated with this use of molecular-dynamics techniques for fluids, such as convergence to continuum behavior as the number of particles is increased. Rapaport & Clementi (1986) have carried the approach even further to demonstrate eddy formation in obstructed fluid flow, as shown with cellular-automata models by Shimomura et al. (1987).
The basic computational problems in this molecular-dynamics approach are also the same as with cellular-automata models. The number of particles required to reduce the level of statistical fluctuations to an acceptable value in even small volumes of the fluid is usually unacceptably large. In addition, there is another major computational problem in molecular-dynamics modeling, namely determining the neighbors that are near enough to be significant. In the cellular-automata models this problem is absent because only the neighboring lattice sites are potential sources of interaction. This near-neighbors problem has been the subject of much research and has been reviewed, for example, by Hockney & Eastwood (1981).
The Monotonic Lagrangian Grid (MLG) is a new method designed to beat the N² problem (Boris 1986a, Lambrakos & Boris 1987, Lambrakos et al. 1988). The MLG is a compact data structure for storing the positions and other data needed to describe N moving nodes. A node with three spatial coordinates has three indices in the MLG data arrays. The data relating to each node are ordered in the memory locations indicated by these indices such that nodes that are close to each other in real space are also close to each other in the data arrays.
The 16 nodes are shown at their irregular spatial locations in Figure 2, but they are indexed regularly in the MLG by a monotonic mapping between the grid indices and the locations.

Figure 2 An example of a two-dimensional Monotonic Lagrangian Grid.

Each grid line in each spatial direction
is forced to be a monotone index mapping. Nodes stored in the data
memory according to this prescription are at most two or three nodes
away from the neighboring nodes that can affect them. Thus, for gradient,
derivative, or force calculations, only a fraction of the possible node-node
interactions need to be considered. No search is necessary to locate near
neighbors, and the necessary logic and computation are ideal for parallel
or multiprocessing methods using this data structure. Computations of
interactions are only made between nodes located in a small contiguous
portion of computer memory. Although this approach results in com
puting interactions for a few distant nodes, it provides a substantial
reduction in computational cost. The computations can be vectorized
efficiently because nodes that are close to each other are indexed through
contiguous memory.
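A minimal sketch of the ordering idea (not the NRL implementation; the sort-and-chunk construction and the index-window size are illustrative assumptions) for a 2-D, 16-node case like Figure 2:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4                               # 4 x 4 MLG -> 16 nodes, as in Figure 2
pts = rng.random((k * k, 2))        # irregular node positions

# One simple O(N log N) construction: sort all nodes by y, split them
# into k rows, then sort each row by x.  Every node in row i then has
# y <= every node in row i+1, and x increases along each row.
rows = pts[np.argsort(pts[:, 1])].reshape(k, k, 2)
for i in range(k):
    rows[i] = rows[i][np.argsort(rows[i, :, 0])]

# Verify the MLG monotonicity conditions along both index directions.
x_monotone = all((np.diff(rows[i, :, 0]) >= 0).all() for i in range(k))
y_monotone = all((np.diff(rows[:, j, 1]) >= 0).all() for j in range(k))

# Near neighbors in space are now found by scanning a small index
# window, e.g. (i +/- 2, j +/- 2), instead of searching all N nodes.
```

The resulting grid is an MLG in the sense above, though, as the text notes, such a grid is not unique for a given set of node positions.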
A construction algorithm that scales as N log N can be used to build an
MLG from randomly located nodes. An MLG can always be found for
arbitrary data, although it is not necessarily unique. Further, when node
motions in real space destroy some of the monotonicity conditions,
another, faster N log N algorithm exists to restore the MLG order. Using these MLG algorithms has reduced the time it takes to do a molecular-dynamics calculation. On the CRAY, restructuring the MLG when the nodes have moved appreciably typically takes 3-5% of each time step. In
addition, the MLG is fully vectorizable and can make optimum use of
parallel computers. Thus, it is also being investigated as the basis for fast
free Lagrange models of continuum fluid dynamics, as described in Section
3.5.
3. NOVEL ALGORITHMS
3.1 Numerical Solution Limitations
Accurate numerical simulation of a complex fluid system can only succeed when the computational model faithfully models the fundamental physics and fluid dynamics of the system. The
computational model must begin with the minimum set of processes and
mechanisms thought to reproduce the essential features of the known
analytic solutions and observations. Each of the major processes must be treated by algorithms optimized to reproduce the qualitative and quantitative aspects of that process accurately, flexibly, robustly, and efficiently. The overall model is constructed by coupling these individually optimized algorithms together. Detailed numerical simulations constructed this way eliminate the need for phenomenological "fudge factors" when benchmarked against analytic solutions and careful experiments. Because they
focus on the underlying processes and mechanisms, these simulations make
much better fluid-dynamic "experiments" than do "kitchen-sink" models,
where a large number of approximate phenomenologies are thrown
together.
Fluid-dynamic convection in the absence of strong physical diffusion
effects is the most difficult flow process to simulate and thus is the pacing
limitation in CFD. The attempts to circumvent this primary difficulty have spawned the extensive field of CFD. Turbulence, mixing, and gas-dynamic shocks are obvious practical instances illustrating this limitation. Extensive research over the last four decades has continually sought improved algorithms to solve continuity equations using finite-resolution grids. Different representations of the convective process have been used, and many different algorithms have been proposed within each of these representations. General background references include Courant et al. (1928), von Neumann & Richtmyer (1950), Godunov (1959), Lax & Wendroff (1960, 1964), Harlow (1964), MacCormack (1971), Potter (1973), Roache (1982), Gottlieb & Orszag (1977), Anderson et al. (1984), Fritts et al. (1985), and Oran & Boris (1987).
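As a concrete baseline for this literature, here is a sketch of the simplest monotone scheme for 1-D convection (first-order donor-cell upwinding; a generic illustration of the class, not any one of the cited algorithms). It conserves the transported quantity exactly and creates no new extrema, at the cost of strong numerical diffusion:

```python
import numpy as np

def upwind_step(rho, courant):
    """One step of first-order upwind (donor-cell) advection of the 1-D
    continuity equation on a periodic grid, for constant u > 0.
    Monotone but diffusive: each new value is a convex combination."""
    return rho - courant * (rho - np.roll(rho, 1))

n = 200
dx = 1.0 / n
u = 1.0
dt = 0.5 * dx / u                    # CFL-stable: Courant number 0.5
courant = u * dt / dx

x = (np.arange(n) + 0.5) * dx
rho = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square-wave profile

mass0 = rho.sum() * dx
for _ in range(200):
    rho = upwind_step(rho, courant)

mass = rho.sum() * dx                        # conserved exactly
overshoot = rho.max() > 1.0 + 1e-9 or rho.min() < -1e-9
```

The square wave smears as it convects, which is precisely the resolution-limited diffusion error that the monotone FCT-type algorithms discussed below work to minimize.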
The inevitable limitations imposed by computer size and speed are
reflected in the need for improved accuracy with a fixed number of degrees
of freedom (e.g. grid points) in the representation, and the need to treat
more general and realistic physical systems. To cope with these limitations, several distinct approaches have been pursued.

The first two of these approaches are, in fact, often used but cannot be relied upon to give accurate solutions of fluid-dynamic equations. The
third approach is quite useful when additional information about the
solution is available. Each of the remaining ways of circumventing the
accuracy limits imposed by resolution requires additional degrees of freedom in the numerical representation. Which of these last three approaches
to adopt is problem dependent and involves trading off many factors.
Sections 3.3-3.5 describe algorithms representing these three promising
avenues.
regions where there are abrupt changes in the variable values.
Figure 3 Finite-element FCT calculation of a shock over two irregularly shaped obstacles.
(Figure is courtesy of R. Lohner).
4. NOVEL IMPLEMENTATIONS
Today's supercomputers such as the Cray, ETA 10, NEC, Fujitsu, and
Hitachi rely on vector registers and pipelines to obtain their speed. In an
arithmetic pipeline the fetching of data from memory, the several stages
of each arithmetic operation, the storage of the computed results back into
memory (or vector registers), the loop-index increment, and the test for
completion of the loop are all carried out simultaneously for the sequential
components of a vector of similar operands. Because vector notation is well suited to describing these operations logically, the term "vectorization" has been adopted for the process of optimizing algorithms to take full advantage of these computers. In most supercomputers, however, the term is a misnomer, since each element of the vector is really calculated sequentially using these hardware "pipelines" of memory controllers, routers, and arithmetic and logical processors. Even the supercomputers with vector registers do not have separate processors for each element of the vector of operations to be performed. The vector registers are merely fast, intermediate locations to store the operands and results, ensuring optimal sequential processing through the pipelines.
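The gap between element-at-a-time code and vector-notation code can be felt in miniature with NumPy on any modern machine (an analogy to, not an instance of, supercomputer pipelining; the array size is arbitrary):

```python
import time
import numpy as np

n = 1_000_000
rng = np.random.default_rng(42)
a, b = rng.random(n), rng.random(n)

# Scalar form: one element at a time, the way a purely sequential
# processor works through the loop, with per-element overhead.
t0 = time.perf_counter()
c_scalar = np.empty(n)
for i in range(n):
    c_scalar[i] = a[i] + 2.0 * b[i]
t_scalar = time.perf_counter() - t0

# Vector form: the same arithmetic expressed over whole operand
# vectors, which NumPy streams through compiled, pipelined loops.
t0 = time.perf_counter()
c_vector = a + 2.0 * b
t_vector = time.perf_counter() - t0

same_result = bool(np.allclose(c_scalar, c_vector))
```

The two forms compute identical results; only the regularity of the operand stream lets the vector form run orders of magnitude faster, which is the "additional economies" point made below.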
Conventional scalar computers are generally rated in millions of instructions per second, or "MIPS," reflecting their scalar nature. For scientific computing a more suitable measure has been millions of floating-point operations per second, or "megaflops." Today's fastest supercomputers perform at levels that can exceed a "gigaflop" (10^9 floating-point operations per second). Each flop requires 5 or 6 instructions, so there is a corresponding factor between MIPS and megaflops. This factor of 5 or 6 in instruction count becomes translated into a factor of 10 or 20 in execution speed because of additional economies possible using the assumed regularity of the operand data in the supercomputer memory. Nevertheless, current supercomputers are still operating essentially as sequential computers.
It is not surprising that these sequentially pipelined computer architectures have reached a stage of development where speed improvements
are both difficult and expensive to obtain. The hardware in these computers
is pushing the physical limits on the speed of uniprocessors, yet they are
still too small and too slow to bring solutions of realistic three-dimensional
problems to most fluid dynamicists.
In the last several years, several different supercomputer architectures
have emerged. These machines are categorized technically in several ways:
(a) by the maximum floating-point operation count in megaflops or
gigaflops, (b) by the number of processors available (i.e. the "grain" or
degree of parallelism possible), (c) by the power of the individual processors, (d) by the network connecting the processors, and (e) by the method of program coordination and control. For this survey we consider five types of architectures for CFD problems.
These machines support not only vectorized computation but also closely coupled and loosely coupled parallel computations. The
availability of such computers makes it necessary to develop representations of the fluid state that are optimal on these machines and to rethink how we write algorithms and software for them. Luckily, CFD is
a very "parallelizable" discipline. Thus, significant progress is also
expected on the algorithmic and software side. Indeed, parallel processing
is currently finding its way into a number of applications in CFD.
Nature "solves" fluid problems in a fully parallel manner, so the most
effective CFD representations and algorithms seem to be those that replicate what nature does to a significant degree. Thus new numerical algorithms optimized for parallel processing as well as old ones recast in parallel
form are becoming available for both computational fluid dynamics and
molecular dynamics. Though some of the currently popular methods may
be eclipsed because of the difficulty of constructing efficient parallel
implementations, other models, better suited to the directions of parallel
processing, will take over their role.
Waterman (1988) suggests differentiating parallel systems based on the
way that the computational elements communicate control information
and data with each other. Communication between the processors over a
common data bus represents a natural extension of the usual minicomputer
architecture. The advantages of this communications technology are simplicity and familiarity. The disadvantage is contention for the limited
communication bandwidth. With more than a few processing elements on
the data bus, some have to sit idle waiting for data while other elements
are allowed to communicate with each other, with the host (control)
processor, or with shared memory. This is like having a single phone line
connecting a number of offices.
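The phone-line analogy can be caricatured in a toy cost model (the coefficients are invented for illustration; only the shape of the curve matters): with p processors on one bus, the compute work divides but the messages serialize, so adding processors eventually stops paying.

```python
def step_time(p, compute=1.0, comm=0.05):
    """Toy cost model for one time step on p processors sharing one bus:
    compute work divides among the processors, but their bus traffic
    serializes, so the communication term grows linearly with p."""
    return compute / p + comm * p

# Sweep the processor count and find where the model step time bottoms out.
times = {p: step_time(p) for p in (1, 2, 4, 8, 16, 32, 64)}
best_p = min(times, key=times.get)   # beyond this, contention dominates
```

With these assumed coefficients the optimum sits at a handful of processors, and a 64-processor bus is slower than a single processor, which is exactly the idle-while-waiting behavior described above.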
The Graphical and Array Processing System (GAPS) is an example of a hybrid supercomputer system assembled for computational fluid dynamics, molecular dynamics, and reactive flows. It was constructed from commercially available hardware and features the hierarchical parallelism of a number of separate array processors with powerful vectorization in the individual processors (see Clementi et al. 1985, Boris 1986b, Boris & Oran 1988). GAPS is not quite as powerful (fast and capable of serving multiple users) as a true supercomputer, but it does support direct graphic interaction with the evolving simulations. Its configuration is becoming standard for bussed array-processor systems.
GAPS is an asynchronous, multitasking, distributed-control system consisting of an APTEC 2400 I/O computer with 12 megabytes of additional fast memory and about 3 gigabytes of online disk storage connected to a VAX 11/780. GAPS contains several major computational components, including six Numerix MARS 432 array processors.

In a hypercube-connected parallel computer, by contrast, each of the N processors is linked to the processors whose indices differ from its own by powers of two up to the processor that is N/2 away.
Actually, there is no hard distinction between the classical cellular-automata models and these hybrid versions, because floating-point numbers of finite precision are discrete. Further, more complex cellular-automata models (for example, in three dimensions) already require many bits of data at each lattice site to specify the state of the system. The distinctions can be viewed as a choice of whether to average over the real physical fluctuations, however represented, before or after one discretizes the problem for digital computation. As noted in Section 2.2, optimal representations with a few bits of precision may well be found.
There are numerous examples of CFD algorithms that already satisfy
the criteria to be hybrid cellular-automata models, including some of the
most general and flexible of the monotone convection algorithms. These
algorithms are entirely local, and exactly the same instructions are used
for all the cells. For these models a computer could be built in which a generic cubical block of 3-D computational cells can be stacked like "Lincoln Logs" or "Lego" in any size or configuration. Initial data loadings of the cells could specify distorted geometries in the sense of spectral elements, and boundary conditions for each block could be chosen from a menu of allowed situations. General geometries could be represented by block structuring, and internal obstacles might be implemented by replacing certain blocks with null-operation blocks or special-purpose blocks representing different materials or fluids. Communication of data from one
block to the next could be coupled through the abutting faces of adjacent
blocks. Power could be delivered by conducting rods threaded through the
assembly along one of the coordinate directions. The physical phenomenon
limiting the size of such a special-purpose computer is heat conduction,
since the algorithms and communications for a hybrid cellular-automaton
are all local. As the linear size and hence spatial resolution are doubled,
the heat-generation rate would increase eightfold but the surface area to
radiate it would increase by only a factor of four.
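The cube-square argument above in a few lines of arithmetic (a restatement of the text's scaling, nothing more):

```python
def scaling(factor):
    """Scale the linear size of the block assembly by `factor` and return
    the growth in heat generated and in radiating surface area."""
    heat = factor ** 3          # cells, and hence heat, scale as L**3
    area = factor ** 2          # radiating surface scales only as L**2
    return heat, area

heat_ratio, area_ratio = scaling(2)     # double the linear size
flux_ratio = heat_ratio / area_ratio    # heat per unit surface area
```

Doubling the linear size gives eight times the heat but only four times the surface, so the heat flux each unit of surface must shed doubles with every doubling of resolution, which is why heat conduction, not logic or communication, bounds the machine's size.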
fluid dynamics and other physical aspects of the problem are so difficult
Laboratory experiments and field observations have long been the main
approach to the study of fluid-dynamic behavior. Despite the decades of
research, understanding is still increasing rapidly because of recent strides
made in diagnostic, recording, and data-processing technology. The advent
of very localized, highly accurate, and virtually noninvasive diagnostic
techniques, generally made possible by the rapid development of laser and
computer technology in the last two decades, has underpinned these recent
advances.
small enough that buoyancy and inertial effects can be neglected. These
digital records also facilitate the direct comparison of CFD simulations
and the experiments.
Computer technology, through high-speed digitization and the use of sophisticated graphics, is also allowing local quantitative measurements of a flow field to be converted to flow visualizations. When data at enough points in the fluid can be taken and recorded with a high enough repetition rate, computers can interpolate the flow properties between the measurement points, allowing subsequent reconstruction of "visualizations" of otherwise "invisible" fields like the fluid velocity, vorticity, or flow Mach number.
ACKNOWLEDGMENTS
This work was sponsored by the Office of Naval Research through the Naval Research Laboratory, by the Defense Nuclear Agency, and by the Defense Advanced Research Projects Agency. I would particularly like to thank Ms. Lorraine Mundo for her help in preparing the manuscript, Dr. David Mosher for his valued suggestions toward improving the manuscript, Dr. Elaine Oran for her help in and dedication to the task of converting a body of work to a coherent reactive-flow modeling discipline, and Drs. Timothy Coffey and Alan Berman for their support of CFD at NRL throughout the years. Finally, I would like to acknowledge my colleagues, many of whom are cited above, who have helped to leave a lasting mark on how fluid-dynamic research is done through their numerous creative contributions to computational fluid dynamics.
Literature Cited
Anderson, D. A., Tannehill, J. C., Pletcher, R. H. 1984. Computational Fluid Mechanics and Heat Transfer. New York: Hemisphere/McGraw-Hill
Andrews, A. E. 1988. Progress and challenges in the application of artificial intelligence to computational fluid dynamics. AIAA J. 26(1): 40-46
Book, D. L., ed. 1981. Finite-Difference Techniques for Vectorized Fluid Dynamics Calculations. New York: Springer-Verlag. 226 pp.
Boris, J. P. 1971. A fluid transport algorithm that works. In Computing as a Language of Physics, pp. 171-189. Vienna: Int. At. Energy Agency
Boris, J. P. 1986a. A vectorized "near neighbors" algorithm of order N using a monotonic logical grid. J. Comput. Phys. 66: 1-20
Boris, J. P. 1986b. Supercomputing at the U.S. Naval Research Laboratory. In Optical and Hybrid Computing, SPIE Vol. 634, pp. 7-23. SPIE Inst. Adv. Technol.
Boris, J. P., Book, D. L. 1976. Solution of the continuity equation by the method of flux-corrected transport. In Methods in Computational Physics, ed. B. Alder, S. Fernbach, M. Rotenberg, 16: 85-129. New York: Academic
Boris, J. P., Oran, E. S. 1988. The numerical simulation of compressible and reactive turbulent structures. Proc. Joint US/Fr. Workshop Turbul. React. Flows, Rouen, Fr., ed. B. G. Murthy, R. Borghi. Berlin: Springer-Verlag. In press
Canuto, C., Hussaini, M. Y., Quarteroni, A., Zang, T. A. 1988. Spectral Methods in Fluid Dynamics. New York: Springer-Verlag. 557 pp.
Chen, H., Matthaeus, W. H. 1987a. Cellular automaton formulation of passive scalar dynamics. Phys. Fluids 30: 1235-37
Chen, H., Matthaeus, W. H. 1987b. New cellular automaton model for magnetohydrodynamics. Phys. Rev. Lett. 58: 1845 (Erratum in Phys. Rev. Lett. 59: 155)
Clementi, E., Corongiu, G., Detrich, J. H. 1985. Parallelism in computations in quantum and statistical mechanics.
Godunov, S. K. 1959. Difference methods for numerical computation of discontinuous solutions of the equations of fluid dynamics. Mat. Sb. 47: 271-306
Gottlieb, D., Orszag, S. A. 1977. Numerical Analysis of Spectral Methods: Theory and Applications. Philadelphia: SIAM
Greenspan, D. 1985. Computer studies in particle modelling of fluid phenomena. Math. Modelling 6: 273-94
Gustafson, J. L., Montry, G. R., Benner, R. E. 1988. Development of parallel methods for a 1024-processor hypercube. SIAM J. Sci. Stat. Comput. 9(4): 1-32
Harlow, F. H. 1964. The particle-in-cell computing method for fluid dynamics. In Methods in Computational Physics
[Lambrakos et al.] In press. Also 1988. US Nav. Res. Lab. Memo. Rep. 6174
Lax, P. D., Wendroff, B. 1960. Systems of conservation laws. Commun. Pure Appl. Math. 13: 217-37
Lax, P. D., Wendroff, B. 1964. Difference schemes for hyperbolic equations with high order accuracy. Commun. Pure Appl. Math. 17: 381-98
Lohner, R. 1987. An adaptive finite element scheme for transient problems in CFD. Comput. Meth. Appl. Mech. Eng. 61: 323-38
Lohner, R., Morgan, K., Zienkiewicz, O. C.
Orszag, S. A. 1980. Spectral methods for problems in complex geometries. J. Comput. Phys. 37: 70-92
Ortega, J. M., Voigt, R. G. 1985. Solution of partial differential equations on vector and parallel computers. ICASE Rep. 85-1, Inst. Comput. Appl. Sci. Eng., NASA Langley Res. Cent., Hampton, Va.
Patera, A. T. 1984. A spectral element method for fluid dynamics: laminar flow in a channel expansion. J. Comput. Phys. 54: 468-88
Potter, D. 1973. Computational Physics. New York: Wiley
Rapaport, D. c., Clementi, E. 1 986. Eddy
Annu. Rev. Fluid Mech. 1989.21:345-385. Downloaded from www.annualreviews.org
put. Meth. Appl. Mech. Eng. 5 1 : 441--65 molecular dynamics study. Phys. Rev.
Lohner, R., Morgan, K., Vahdati, M . , Boris, Lett. 6(57): 695-98
J. P., Book, D. L. 1 988. FEM-FCT: com Roache, P. J. 1 982. Computational Fluid
bining unstructured grids with high res Dynamics. Albuquerque, NM: Hermosa
oluti on . Commun. Appl. Numer. Methods. Rodrigue, G., ed. 1 982. Parallel Computa
In press tions. New York: Academic. 403 pp.
MacCormack, R. W. 1 97 1 . Numerical solu Seitz, C. L. 1 985. The cosmic cube. Commun.
tion of the interaction of a shock wave ACM 28: 22-33
with a laminar b oundary layer. Proc. Int. Shimomura, T., Doolen, G. D., Hasslacher,
Con! Numer. Methods in Fluid Dyn., 2nd, B., Fu, C. 1987. Calculations using lattice
ed. M . Holt, pp. 1 5 1--63. New York: gas techniques. Los Alamos Sci: Spec. Iss.
Springer-Verlag 1 5 : 201-10
McDonald, B. E. 1988. Flux-corrected Spalart, P. R. 1 986. Numerical study of sink
pseudospectral method for scalar hyper flow boundary layers. J. Fluid Mech . 172:
bolic conservation laws. J. Comput. Phys. 307-28
In press Spalart, P. R. 1 988. Direct simulation of a
McDonald, B. E., Ambrosiano, J., Zalesak, turbulent boundary layer up to Re =
S. 1 985. The pseudo spectral flux cor 1 4 1 0. J. Fluid Mech. 1 87: 61-98
rection (PSF) method for scalar hyper Sweby, P. K . 1 984. High resolution schemes
bolic problems. Proc. Int. Assoc.for Math. using flux limiters for hyperbolic con
and Comput. in Simulation World Congr., servation laws. SIAM J. Numer. Anal. 2 1 :
11th, ed. R. Vichnevetsky, pp. 67-70. New 995-1 0 1 1
Brunswick, NJ: Rutgers Univ. Press Symon, K . R., Marshall, D. , Li, K . W. 1 970.
Metcalfe, R. W., Orszag, S. A., Brachet, M . Bit-pushing and distribution pushing
E., Menon, S., Riley, J. J. 1987. Secondary techniques for the solution of the Vlasov
instability of a temp orally growing mixing equati on . Proc. Con! Numer. Simulation
laye r. J. Fluid Meeh. 1 84: 207--44 of Plasmas, 4th, pp. 68-125. Washington,
Miller, K., Miller, R. 1 98 1 . Moving finite DC: US Govt. Print. Off. [No. 085 1 00059]
elements, part I. SIAM J. Numer. Anal. Trease, H. 1985. Three-dimensional free
1 8: 1 0 1 9-32 Lagrangian hydrodynamics. See Fritts et
Montgomery, D., Doolen, G. D. 1 987. Two a!. 1 985, pp. 145--5 7
cellular automata for plasma computa van Leer, B. 1979. Towards the ultimate con
tions. Complex Syst . 1 (4): 83 1-38 servative difference scheme. Proc. Int.
Morton, K. W., Baines, M. J., eds. 1 986. Con! Numer. Methods in Fluid Dyn. Lect.
Numerical Methods for Fluid Dynamics. Notes Phys., ed. E. Kraus, 170: 507-12.
Oxford: Clarendon New York: Springer-Verlag
Moser, R. D., Moin , P. 1987. The effects of von Neumann, 1., Richtmye r , R. D. 1950. A
curvature in wall-bounded turbulent method for the numerical calculation of
flows. J. Fluid Mech. 1 7 5 : 479-5 1 0 hydrodynamic shocks. J. Appl. Phys . 2 1 :
National Academy o f Sciences/National 232-57
Research CounciL 1986. Current Capa Waterman, P. J. 1988. Parallel processing
bilities and Future Directions in Com finds a place. Defense Comput. I: 24-28
putational Fluid Dynamics . Washington, Winkler K.-H. A., Chalmers, J. W . , Hodson,
DC: NatL Acad. Press S. W., Woodward, P., Zabusky, N. J.
Oran, E. S., Boris, J. P. 1987. Numerical 1987. A numerical laboratory. Phys. Today
Simulation of Reactive Flow. New York: 40(10): 28-37
Elsevier. 601 pp. Wolfram, S. 1985. Undecidability and in-
COMPUTATIONAL FLUID DYNAMICS 385
tractability in theoretical physics. Phys. 3 1 : 335-62
Rev. Lett. 54(8): 735-38 Zalesak, S. 1 98 1 . Very h igh order and
Woodward, P., Colella, P. 1984. The numeri pseudo spectral flux-corrected transport
cal simulation of two-dimensional fluid algorithms for conservation laws. In
flow with strong shocks. J. Comput. Phys. Advances in Computer Methods/or Partial
54(1): 1 1 5-73 Differential Equations, ed. R. Vichne
Yakhot, V., Orszag, S. A. 1986. Reynolds vetsky, R. S. Stepleman, pp. 1 2�34. New
number scaling of cellular automaton Brunswick, NJ: Int. Assoc. Math. Com
hydrodynamics. Phys. Rev. Lett. 56: 1691- put. Simulation
93 Zienkiewicz, O. c., Morgan, K. 1983. Finite
Zalesak, S. 1979. Fully multidimensional Elements and Approximation. New York:
flux-corrected transport. J. Comput. Phys. Wiley
Annu. Rev. Fluid Mech. 1989.21:345-385. Downloaded from www.annualreviews.org
Access provided by Sun Yat-Sen University on 12/12/21. For personal use only.