
2 Basic Atomistic Modeling

Atomistic modeling provides a fundamental description of materials
behavior and materials deformation phenomena. Molecular dynamics simulations
represent the numerical implementation to solve the equations of motion
of a system of atoms or molecules, resulting in the dynamical trajectories of
all particles in the system. The purpose of this chapter is to present an intro-
duction into molecular dynamics modeling and simulation approaches. The
discussion includes the physical basics, numerical implementation and exam-
ples of atomistic models for specific materials, as well as a brief introduction
into multiscale simulation methods. We also review analysis methods, in par-
ticular visualization schemes that can be used to extract useful information
from large atomistic systems.

2.1 Introduction
Molecular dynamics simulation techniques are very widely applicable, for
a range of materials and states, including gases, liquids, and solids. The
first molecular dynamics studies were focused on modeling thermodynamical
behavior of gases and liquids in the 1950s and 1960s [81–84]. It was not until
the early 1980s that the first molecular dynamics studies applied to the
mechanical behavior of solids were published. As a consequence
of the general applicability of molecular dynamics, many of the methods and
approaches described in this book can also be useful for the study of gases and
liquids, as well as the interaction of those with solids in systems that contain
both solids and liquids.
The outline of this chapter is as follows. We begin with a presentation of
the basic formulation of molecular dynamics (sometimes also referred to as
“MD”). After discussing the numerical strategies associated with molecular
dynamics, we introduce interatomic potentials that mimic the energy
landscape predicted by quantum mechanics (we emphasize here that quantum
mechanics will not be discussed explicitly in this book, and the reader
is kindly referred to other literature).

32 Atomistic Modeling of Materials Failure

Fig. 2.1 Molecular dynamics can be used to study material properties at the inter-
section of various scientific disciplines. This is because the notion of a “chemical
bond” as explicitly considered in molecular dynamics provides a common ground as
it enables the cross-interaction between concepts used in different disciplines (here
exemplified for the disciplines of biology, mechanics, materials science, and physics)

We briefly review statistical mechanics concepts that provide
the theoretical and numerical basis for property
calculation from the results of molecular dynamics simulations. We discuss
the calculation of temperature, measures for the geometry of a particular
atomic system, methods to analyze and display crystal defects, and some
correlation functions that enable one to predict transport properties from
molecular dynamics studies. As illustrated in Fig. 2.1, due to the fundamental
nature of the description of the material behavior, molecular dynamics can
be used to study material properties at the intersection of different scientific
disciplines.

2.2 Modeling and Simulation

The significance of the atomistic viewpoint for failure processes and the
enormous computational burden associated with such problems make modeling
and simulation of failure a promising and exciting area of research. In this
section we discuss some fundamental concepts associated with model build-
ing and the solution of the particular numerical problems to be computed in
molecular dynamics simulations.
The terms modeling and simulation are often used in conjunction with
the numerical solution of physical problems. However, it is important to note
that the two words have quite distinct meanings. The term modeling refers

to the development of a mathematical model of a physical situation, whereas


simulation refers to the procedure of solving the equations that resulted from
model development. Models are often simplifications or idealizations of rather
complex physical systems or phenomena. A key aspect of model development
is the ability to map the essential physics features of a system into a descrip-
tion. M.F. Ashby of Cambridge University used the example of a subway map
to illustrate the concept of model building [85]: A model is an idealization. Its
relationship to the real problem is like that of the map of the London tube trains
to the real tube systems: a gross simplification, but one that captures certain
essentials. The map misrepresents distances and directions, but it elegantly
displays the connectivity. The quality or usefulness in a model may be mea-
sured by its ability to capture the governing physical features of the problem.
Ashby states that all successful models unashamedly distort the inessentials in
order to capture the features that really matter. At worst, a model is a concise
description of a body of data. At best, it captures the essential physics of the
problem, it illuminates the principles that underline the key observations, and
it predicts behavior under conditions which have not yet been studied.
The concept of model building is illustrated in Fig. 2.2, here shown for the
subway system in the Boston area. The comparison of the left and right panels
illustrates how models can help capture the essential information and
features of a physical system.
The concepts of modeling and simulation are intimately coupled: Without
model development, a simulation cannot be carried out. Often, the partial
differential equations themselves that result directly from model development
do not allow one to draw significant conclusions about a physical system at hand,
until simulations are carried out. This is nicely summarized in a phrase coined
by Sidney Yip of MIT [86] who stated that Modeling is the physicalization of
a concept, simulation its computational realization.
Both modeling and simulation have their specific challenges. The tasks
associated with modeling require insight into the physics of the system, its
constituents or the behavior of the particles. The setup of the simulation
requires knowledge in the field of numerical techniques that are suitable to
solve complicated systems of partial differential equations, or to make compu-
tation proceed fast on modern supercomputers. To make efficient use of results
from simulation, strategies need to be used to analyze and interpret this data.
As indicated earlier, the results of atomistic simulations are merely numbers
that represent the positions and velocities of atoms at different time steps.
Making sense of this huge amount of information can be a daunting task.
This becomes more evident as the simulation sizes increase to systems with
billions of atoms. Even with today’s largest computers, system sizes with only
a few billion atoms can be simulated, whereas a cubic centimeter of material
already contains more than 1023 atoms.
This book illustrates techniques and approaches for both modeling
and simulation. The concepts will be applied to specific problems
of describing materials failure phenomena. However, the concepts can be

Fig. 2.2 This figure illustrates the concept of model building. Panel (a) on the left
shows the physical situation of a map of the subway lines. This representation makes
it quite difficult to determine a strategy to use the subway system to travel from
the cities of Braintree to Revere, for instance. The model representation depicted
in panel (b) on the right enables one to determine quite easily which subway line
to take, where to change the subway line, and how many subway stops there are
in between. This example illustrates that even though the model representation on
the right misrepresents the actual distances and directions, it elegantly displays the
connectivity. This figure was created based on a snapshot from the Massachusetts
Bay Transportation Authority (MBTA) web site (URL: http://www.mbta.com/),
reprinted with permission from the Massachusetts Bay Transportation Authority

transferred to other applications where similar methods could be fruitfully
applied, such as self-assembly, diffusion studies, or studies of phase
transformation.

2.2.1 Model Building and Physical Representation

Together with data analysis, model building and finding an appropriate
physical representation are probably the most difficult tasks in computational
materials science. Great care must be taken when models are built and when
results of simulations are analyzed.
Atomistic simulations have proven to be a powerful way to investigate
the complex behavior of dislocations, cracks, and grain boundary processes
at a very fundamental level. Atomistic methods have gained an increasingly
important role and level of acceptance in modern materials modeling. One of
the strengths of atomistic methods, and the reason for their great success, is
their very fundamental viewpoint of materials phenomena. For instance, the only
physical law that is used to simulate the dynamical behavior is Newton’s law,

along with a definition of how atoms interact with each other (for a discussion
of Newton’s law, see Sect. 3.1). Despite this quite simple basis, very complex
phenomena can be simulated. Unlike many continuum mechanics approaches,
atomistic techniques require no a priori assumption on the defect dynamics
or its behavior.
Once the atomic interactions are chosen, the entire material behavior is
determined. This aspect of atomistic modeling provides a terrific opportunity
to build insightful models, that is, models that capture the essentials, elucidate
fundamental mechanisms, and thereby provide an elegant representation of
the key principles that underlie the key observations. While in some cases it
is difficult to find an appropriate interatomic potential for a material, atomic
interactions can also be chosen such that generic properties common to a large
class of materials are incorporated (e.g., describing a general class of brittle or
ductile materials). This approach refers to the design of “model materials” to
study specific materials phenomena. Despite the fact that such model building
has been in practice in fluid mechanics for many years, the concept of “model
materials” in materials science is relatively new. On the other hand, atomic
interactions can also be chosen very accurately for a specific atomic interaction
using quantum mechanical methods such as the density functional theory
(DFT) [87], which enables one to approximate the solutions to Schrödinger’s
equation for a particular atomistic model.
Richard Feynman also emphasized the importance of the atomistic
viewpoint in the famous Feynman Lectures on Physics [88], where he stated
that “the atomic hypothesis (or the atomic fact, whatever you wish to call it)
that all things are made of atoms – little particles that move around in per-
petual motion, attracting each other when they are a little distance apart,
but repelling upon being squeezed into one another [...] provides an enormous
amount of information about the world, if just a little imagination and thinking
are applied.”
This underlines atomistic simulations as a natural choice to study materials
at a fundamental level. This is particularly true for studies of materials failure!
The atomistic level provides the most fundamental, sometimes referred to as
the ab initio, description of the failure processes. Many materials phenom-
ena are multiscale phenomena. For a fundamental understanding, simulations
should ideally capture the elementary physics of single atoms and reach length
scales of thousands of atomic layers at the same time. This can be achieved
by implementing atomistic models numerically on very large computational
facilities.

2.2.2 The Concept of Computational Experiments

An increasing number of researchers now consider the computer a tool to
do science, much as experimentalists use their laboratories to perform experiments.
Computer simulations have thus sometimes been referred to as “computer
experiments” or “computational experiments.”

The art of a computational experiment is to (1) build an appropriate model
of the physical situation, and (2) construct it in such a way that the results
obtained by numerical simulation of this model can be utilized to advance the
understanding of a particular phenomenon. The third important step is to (3)
analyze and interpret the results of computational experiments to advance the
understanding of the simulated process.
Computational experiments must be set up with care. For instance, if a
model is highly accurate but contains a very large number of numerical param-
eters, it may be impossible to understand how these parameters are related to
a phenomenon of interest. Only after reducing the large number of parameters
to a simpler set of reduced variables is it possible to draw significant
conclusions about the behavior of the system. Finding these reduced variables
is a central aspect in designing clever computational experiments.

2.3 Basic Statistical Mechanics


Statistical mechanics provides methods that help us to analyze molecular
dynamics simulations and to interpret results from these simulations. In par-
ticular, it leads to a direct link between an ensemble of microscopic states and
the corresponding macroscopic thermodynamical properties.
The conversion of the microscopic information to macroscopic observ-
ables such as pressure, stress tensor, strain tensor, energy, heat capacities,
and others requires theories and strategies developed in the realm of sta-
tistical mechanics. Atomistic data (e.g., the pressure tensor) is not valid
instantaneously but needs to be averaged over multiple configurations of the
microscopic system.
One of the most central and probably most useful theorems in the practical
application of atomistic simulation is the Ergodic hypothesis. The Ergodic
hypothesis states that the ensemble average of a property A equals the time
average (the symbol ⟨·⟩ denotes an averaged variable):

\langle A \rangle_{\text{Ensemble}} = \langle A \rangle_{\text{Time}} .    (2.1)

This is a most useful relation that enables the calculation of thermodynamical


properties by simply averaging over sufficiently long time trajectories (and
thereby measuring the appropriate ensemble properties).
The ensemble average of a property A is defined as

\langle A \rangle_{\text{Ensemble}} = \int_p \int_r A(p, r)\, \rho(p, r)\, dp\, dr ,    (2.2)

with p_i = m_i v_i as the linear momentum of particle i, and p = {p_i} being
the set of all linear momenta in the system, for i = 1 . . . N. Similarly,
r = {r_i} represents the position vectors of all particles. The system state is
uniquely defined by the combination (p, r), since the Hamiltonian H = H(r, p).
2 Basic Atomistic Modeling 37

In (2.2), the function ρ(p, r) is the probability density distribution, which is
defined as

\rho(p, r) = \frac{1}{Q} \exp\left( -\frac{H(r, p)}{k_B T} \right)    (2.3)

with

Q = \int_p \int_r \exp\left( -\frac{H(r, p)}{k_B T} \right) dp\, dr .    (2.4)
To evaluate these expressions, we would need to know every possible state of
the system, characterized by all possible values of p and r. Obtaining this
information is very difficult, which immediately shows the significance of the
Ergodic hypothesis. The time average from a molecular dynamics simulation
can be calculated by

\langle A \rangle_{\text{Time}} = \frac{1}{M} \sum_{i=1}^{M} A(p, r),    (2.5)

where M is the number of measurements taken.
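As a minimal illustration of (2.5), the following Python sketch averages a handful of measurement samples. The sample values are made up purely for illustration; in a real simulation they would be properties sampled along the MD trajectory.

```python
# Sketch: time average of a property over M measurements, per (2.5).
# The sample values below are hypothetical, not from any simulation.

def time_average(samples):
    """Return the time average (1/M) * sum of A over M measurements."""
    return sum(samples) / len(samples)

samples = [1.02, 0.98, 1.01, 0.99, 1.00]
avg = time_average(samples)
print(avg)  # -> 1.0 (up to floating-point rounding)
```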
The Ergodic hypothesis further lays the foundation for Monte Carlo tech-
niques. Whereas molecular dynamics generates trajectories over time, Monte
Carlo generates those within the constraint of an ensemble average. The
Ergodic hypothesis states that both viewpoints are equivalent, allowing one to cal-
culate thermodynamical properties using Monte Carlo. In this sense, Monte
Carlo schemes generate a number of possible microscopic states along with
specific probability densities ρ, which are then summed up discretely to obtain
an estimate for the observed variable.

2.4 Formulation of Classical Molecular Dynamics

In atomistic simulations, the goal is to predict the motion of each atom in the
material, characterized by the atomic positions ri (t), the atomic velocities
vi (t), and their accelerations ai (t) (see Fig. 2.3). Each atom is considered
as a classical particle that obeys Newton’s laws of mechanics. The collective
behavior of the atoms allows one to understand how the material undergoes
deformation, phase changes, or other phenomena by providing links between
the atomic scale and meso- or macroscale phenomena. Extraction of information
from atomistic dynamics can be challenging and typically involves methods
rooted in statistical mechanics.
The total energy of such a system is

H = K + U, (2.6)

where K is the kinetic energy of the entire system and U the potential energy.
Fig. 2.3 Molecular dynamics generates the dynamical trajectories of a system
of N particles by integrating Newton’s equations of motion, with suitable ini-
tial and boundary conditions, and proper interatomic potentials, while satisfying
macroscopic thermodynamical (ensemble-averaged) constraints, leading to atomic
positions r_i(t), atomic velocities v_i(t), and accelerations a_i(t), all as a
function of time, for all particles i = 1 . . . N, each of which has a specific mass m_i

The kinetic energy is the sum of the kinetic energies of all N particles,

K = \frac{1}{2} \sum_{i=1}^{N} m_i v_i^2 ,    (2.7)

and the total potential energy is the sum of the potential energies of all particles:

U(r) = \sum_{i=1}^{N} U_i(r),    (2.8)

noting that the potential energy of each particle Ui depends on the position
of itself and all other particles in the system, expressed by r = {ri } as defined
above. For now we leave this expression as an unknown.
The total energy H is also referred to as the Hamiltonian. We note that
K = K(p) and U = U (r), that is, the kinetic energy depends only on the
velocities or the linear momenta of the particles and the potential energy is a
function only of the position vectors.
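To make the kinetic-energy term (2.7) concrete, the following Python sketch evaluates K for a toy two-particle system. The masses and velocity vectors are invented values for illustration only.

```python
# Sketch: kinetic energy K = (1/2) * sum_i m_i |v_i|^2, per (2.7),
# for a hypothetical two-particle system (units are arbitrary here).

def kinetic_energy(masses, velocities):
    K = 0.0
    for m, v in zip(masses, velocities):
        # |v_i|^2 = vx^2 + vy^2 + vz^2
        K += 0.5 * m * sum(c * c for c in v)
    return K

masses = [1.0, 2.0]
velocities = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(kinetic_energy(masses, velocities))  # -> 4.5
```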
To satisfy Newton’s law F_i = m_i a_i for each particle i in the system, the
following equation governs the dynamics of the system:

m_i \frac{d^2 r_i}{dt^2} = -\frac{dU(r)}{dr_i} .    (2.9)

The right-hand side corresponds to the negative gradient of the potential energy,
which is the force (note that the potential energy of the system depends on the
positions of all particles r). Equation (2.9) represents a system of coupled
second-order nonlinear differential equations, an N-body problem for which no
exact solution exists when N > 2.

However, the equation can be solved by discretizing the equations in time. We


note that the spatial discretization for the problem is given by the atom size,
as discussed in Sect. 1.4.

2.4.1 Integrating the Equations of Motion

A simple solution strategy is to develop a stepping method that gives new


coordinates and velocities from the old ones, for each particle i, such as

ri (t0 ) → ri (t0 + ∆t) → ri (t0 + 2∆t) → ri (t0 + 3∆t) . . . (2.10)

A numerical scheme can be constructed by considering the Taylor expan-


sion of the position vector ri :
1
ri (t0 + ∆t) = ri (t0 ) + vi (t0 )∆t + ai (t)∆t2 + . . . (2.11)
2
and
1
ri (t0 − ∆t) = ri (t0 ) − vi (t0 )∆t + ai (t)∆t2 + . . . (2.12)
2
Adding these two equations together yields

ri (t0 + ∆t) = −ri (t0 − ∆t) + 2ri (t0 ) + ai (t)∆t2 + . . . (2.13)

Equation (2.13) provides a direct link between new positions (at t0 + ∆t) and
the old positions and accelerations (at t0 ). The accelerations can be obtained
from the forces by considering Newton’s law,
Fi
ai = . (2.14)
m
This updating scheme is referred to as the Verlet central difference method.
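As an illustration (not code from this book), the following Python sketch applies the central-difference update (2.13), together with (2.14), to a single particle in a 1D harmonic potential U = (1/2) k x². The spring constant, mass, time step, and initial conditions are arbitrary illustrative values.

```python
# Sketch of the Verlet central-difference scheme (2.13) for one
# 1D particle in a harmonic potential; F = -k x. All parameters
# here are illustrative, not physical.

k, m, dt = 1.0, 1.0, 0.01

def force(x):
    return -k * x

# Two starting positions are needed: x(t0 - dt) and x(t0).
x_prev, x = 1.0, 1.0

for _ in range(1000):
    a = force(x) / m                       # accelerations via (2.14)
    x_new = 2.0 * x - x_prev + a * dt**2   # update via (2.13)
    x_prev, x = x, x_new

# The trajectory oscillates and stays bounded, as expected physically.
print(abs(x) <= 1.001)  # -> True
```

A notable property of this scheme is that velocities never appear explicitly; they can be recovered afterwards from the stored positions if needed.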
There exist many other integration schemes that are frequently used
in molecular dynamics implementations. In the following few sections we
summarize a few additional popular algorithms.

Leap-Frog Algorithm

In the leap-frog algorithm, the positions are updated as

r_i(t + \Delta t) = r_i(t) + v_i\!\left(t + \tfrac{1}{2}\Delta t\right) \Delta t    (2.15)

and the velocities as

v_i\!\left(t + \tfrac{1}{2}\Delta t\right) = v_i\!\left(t - \tfrac{1}{2}\Delta t\right) + a_i(t)\, \Delta t .    (2.16)

Velocity Verlet Algorithm

In the Velocity Verlet algorithm, the positions are updated as

r_i(t + \Delta t) = r_i(t) + v_i(t)\, \Delta t + \frac{1}{2} a_i(t)\, \Delta t^2 ,    (2.17)

and the velocities as

v_i(t + \Delta t) = v_i(t) + \frac{1}{2} \left( a_i(t) + a_i(t + \Delta t) \right) \Delta t .    (2.18)
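A minimal Python sketch of the Velocity Verlet update, again for an illustrative 1D harmonic oscillator (all parameters invented), shows the characteristic near-conservation of the total energy H = K + U:

```python
# Sketch of Velocity Verlet, (2.17)-(2.18), for a 1D harmonic
# oscillator with illustrative parameters (not from the text).

k, m, dt = 1.0, 1.0, 0.01

def accel(x):
    return -k * x / m

x, v = 1.0, 0.0
a = accel(x)
for _ in range(1000):
    x = x + v * dt + 0.5 * a * dt**2   # position update, (2.17)
    a_new = accel(x)
    v = v + 0.5 * (a + a_new) * dt     # velocity update, (2.18)
    a = a_new

# Total energy E = K + U starts at 0.5 and should stay very close
# to it, reflecting the (microcanonical) NVE character of the scheme.
E = 0.5 * m * v**2 + 0.5 * k * x**2
print(abs(E - 0.5) < 1e-4)  # -> True
```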

2.4.2 Thermodynamic Ensembles and Their Numerical Implementation

When the equations reviewed in the previous section are integrated, the
resulting thermodynamical ensemble is N V E, which means that the parti-
cle number N , the system volume V , and the total energy of the system E
remain constant throughout the simulation. Other thermodynamical ensem-
bles can be realized by modifying the equations of motion in an appropriate
way, leading to NVT or NPT ensembles. Table 2.1 gives an overview of
various thermodynamical ensembles.

Table 2.1 Overview of various thermodynamical ensembles (the parameter μ is the
chemical potential)

Ensemble   Ensemble name
NVE        Microcanonical ensemble
NVT        Canonical ensemble
NPT        Isobaric–isothermal ensemble
μVT        Grand canonical ensemble

To illustrate the approach of modifying the equations of motion to obtain a


specific thermodynamical ensemble, here we briefly review a simple algorithm
to enable an NVT ensemble, the Berendsen thermostat. The approach is based
on the idea of changing the velocities of the atoms so that the temperature
(which is a direct function of the atomic velocities, as discussed later on in
Sect. 2.8.1) approaches the desired value, mimicking the effect of a heat bath.
This is realized by calculating a rescaling parameter λ,

\lambda = \sqrt{ 1 + \frac{\Delta t}{\tau} \left( \frac{T_{\text{set}}}{T} - 1 \right) } ,    (2.19)

where Δt is the molecular dynamics time step and τ is a parameter called the
“rise time” that describes the strength of the coupling to the hypothetical
heat bath. The velocities are then rescaled according to this parameter, where
the new velocity vectors are given by

v_{\text{new},i} = \lambda v_i    (2.20)

for each atom i.
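A minimal Python sketch of the Berendsen rescaling, (2.19)–(2.20), is given below. The temperatures and coupling constant are placeholder values; in an actual code, the current temperature T would be computed from the kinetic energy (see Sect. 2.8.1).

```python
# Sketch of Berendsen velocity rescaling, per (2.19)-(2.20).
# All numerical values are illustrative placeholders.
import math

def berendsen_lambda(T, T_set, dt, tau):
    """Rescaling factor lambda = sqrt(1 + (dt/tau) * (T_set/T - 1))."""
    return math.sqrt(1.0 + (dt / tau) * (T_set / T - 1.0))

# If the current temperature is below the set point, velocities are
# scaled up (lambda > 1); if above, scaled down (lambda < 1).
lam = berendsen_lambda(T=250.0, T_set=300.0, dt=1.0, tau=100.0)
velocities = [(1.0, 0.0, 0.0)]
velocities = [tuple(lam * c for c in v) for v in velocities]  # (2.20)
print(lam > 1.0)  # -> True
```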


Other approaches to enable the N V T ensemble include methods based
on Langevin dynamics and the Nose–Hoover scheme. For N P T ensembles,
the Parrinello–Rahman approach provides a popular choice for an algorithm.
In this method, in addition to adjusting the temperatures to approach the
desired control value, the pressure is adjusted by changing the cell size of the
simulated system.
States with high energy will occur less often, states with low energy more
often. Each microscopic state has a certain probability associated with the
corresponding energies. During the integration of the equations of motion,
molecular dynamics naturally samples these microscopic configurations and
provides a collection of snapshots that after averaging correspond to the
proper macroscopic state. The obtained trajectories can then be used to
calculate thermodynamical properties by simply averaging over the sampled
configuration without further weighting.
For example, a selection of N particles in a box at pressure P , temperature
T , and volume V has many microscopic configurations (p, r) that all corre-
spond to the same thermodynamical macroscopic state. Molecular dynamics
can for instance be used to calculate the pressure for a given temperature
and system volume, by solving the dynamical evolution of the system over
long time scales. In this spirit, molecular dynamics enables one to sample the
phase space for admissible configurations, giving them the proper weighting.
According to the Ergodic hypothesis (see (2.1)), the molecular dynamics time
average is equal to the ensemble average, and system properties can be calculated
with either method; the results will be identical.

Fig. 2.4 Schematic of the atomic displacement field as a function of time. The
atomic displacement field consists of a low-frequency (“coarse”) and a
high-frequency (“fine”) part

A critical step in solving the dynamical equations (for instance, using the
Velocity Verlet scheme) is to consider the size of the time step. Figure 2.4
depicts a schematic of the atomic displacement field as a function of time.
As can be seen, the displacement history consists of low- and high-frequency
contributions, where the total displacement can be written as

u(t) = \bar{u}(t) + u'(t),    (2.21)

with u'(t) as the fine contribution and \bar{u}(t) as the coarse part. To solve the
equations of motion, the fine part needs to be discretized, which results in
a significant computational burden, as most systems require time steps of
approximately 1 fs (10^{-15} s) to resolve the rapid oscillations of u'(t). Inter-
atomic bonds that involve relatively light hydrogen atoms sometimes require
even smaller time steps, on the order of 0.1 fs.
There are also adaptive techniques that are based on the idea to dynami-
cally change the time step in a simulation, depending on the maximum atomic
velocities [18]. Such approaches may help to increase the efficiency of molecular
dynamics studies without adversely affecting the results.

Fig. 2.5 Example of a harmonic oscillator with spring constant k = φ''(r = r_0), used
to extract information about the time step required for integration of the equations
of motion. The dashed line shows the (nonlinear) realistic potential function between
a pair of atoms, of which the harmonic oscillator is the second-order approximation.

To estimate the time step for a particular system, one can estimate the
oscillation frequency of a harmonic oscillator:

\nu = \frac{1}{2\pi} \sqrt{\frac{k}{m}} ,    (2.22)

where k is the spring constant, given by the second derivative of the potential
function with respect to the interatomic distance, k = φ''(r = r_0), evaluated at
the equilibrium separation r = r_0. The function φ describes how the energy of
the bond changes as a function of the distance r. This is schematically shown
in Fig. 2.5. The time step should then be chosen as

\Delta t_{\min} \sim \frac{1}{\nu} .    (2.23)
In summary, the time step ∆t needs to be small enough to model the vibrations
of atomic bonds correctly. The vibration frequencies may be extremely
high, in particular for light atoms, since

\Delta t_{\min} \sim \sqrt{m} .    (2.24)

The stiffer the bond is around its equilibrium position, the smaller the critical
time step, as

\Delta t_{\min} \sim \frac{1}{\sqrt{k}} .    (2.25)
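The estimate (2.22)–(2.23) can be sketched numerically. The spring constant below is a rough, illustrative value for a stiff covalent bond, and the mass is approximately that of a hydrogen atom; neither number is taken from the text.

```python
# Sketch: estimating the bond vibration frequency (2.22) and the
# resulting bound on the time step. k and m are illustrative values.
import math

def frequency(k, m):
    """nu = (1/(2*pi)) * sqrt(k/m), per (2.22)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

k = 500.0      # spring constant in N/m (illustrative)
m = 1.66e-27   # approximate mass of a hydrogen atom in kg

nu = frequency(k, m)      # on the order of 1e14 Hz
dt_max = 1.0 / nu         # time step must be well below this, per (2.23)
print(dt_max < 1e-13)  # -> True, i.e., the period is well under 0.1 ps
```

The result, a vibration period on the order of 10 fs, is consistent with the ~1 fs time steps quoted above.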
The fact that the time step is on the order of several femtoseconds has major
implications on the time scale molecular dynamics can reach. For example,
approximately 1,000,000 integration steps are needed to calculate a trajectory
that covers 1 ns, providing a severe computational burden. Further, the time
step typically cannot be varied during simulation. The total time scale that can
be reached by molecular dynamics is typically limited to a few nanoseconds.
Some exceptionally long simulations have been reported that cover up to a few
microseconds. However, such simulations typically take up to several months.
This aspect of molecular dynamics is sometimes referred to as the time
scale dilemma. Even though the number of atoms in a simulation can be
easily increased by adding more processors (e.g., using parallel computing),
time cannot easily be parallelized. As can be seen in the updating scheme (2.13),
an atomistic system is generally not independent in time: The behavior at t_0
influences the state at t_1 > t_0, and the time stepping cannot be carried out
independently on multiple processors.
Several researchers are currently developing techniques such as the tem-
perature accelerated dynamics method, parallel replica method, and many
others to overcome this limitation, and make use of massively parallelized
computing to expand the accessible time scales [89]. Some of these techniques
will be discussed later.

2.4.3 Energy Minimization

Energy minimization is an approach in which the potential energy of the
system, at zero temperature, is minimized. It corresponds to the physical
situation of cooling a material down to absolute zero. Methods in which the
deformation behavior of a material or structure is probed during continuous
energy minimization are also referred to as molecular statics. Such approaches
have been used to study dislocation nucleation from crack tips or the
deformation of carbon nanotubes. They mimic a quasistatic experiment, albeit
neglecting the effect of finite temperature.
A variety of algorithms exist to perform energy minimization, most notably
conjugate gradient methods or steepest descent methods. Figure 2.6 depicts an
example result of an energy minimization, showing how the potential energy
of the system decreases systematically with the number of iteration steps and
finally converges to a finite value.
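As a toy illustration of the steepest descent idea, the following Python sketch minimizes a made-up one-dimensional potential U(x) = (x − 2)²; the potential, step size, and iteration count are purely illustrative.

```python
# Sketch of steepest-descent energy minimization for a single 1D
# degree of freedom with the hypothetical potential U(x) = (x - 2)^2.

def grad_U(x):
    return 2.0 * (x - 2.0)  # dU/dx

x, step = 0.0, 0.1
for _ in range(200):
    x = x - step * grad_U(x)  # move downhill along -grad U

# The iterate converges toward the energy minimum at x = 2,
# mirroring the monotonic energy decrease sketched in Fig. 2.6.
print(abs(x - 2.0) < 1e-6)  # -> True
```

Conjugate gradient methods follow the same downhill idea but choose successive search directions more cleverly, typically converging in far fewer iterations.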

Fig. 2.6 Example result of an energy minimization, here for the structure of a
solvated protein (lysozyme). As the number of iterations progresses, the total
potential energy decreases until it converges and reaches a constant value (see [8]
for further details)

2.4.4 Monte Carlo Techniques

Statistical mechanics provides a theoretical framework to link a number of


microscopic states to macroscopic thermodynamical variables. To achieve this
link, one needs to obtain samples of microscopic states, for instance by using
a dynamical simulation (e.g., molecular dynamics).
An alternative approach to generate this data is to calculate the appro-
priate ensemble averages directly. This is done in Monte Carlo techniques by
sampling phase and state space.

Monte Carlo Techniques: Brief Introduction

The key task of computing the appropriate ensemble average is accomplished
by repeated random sampling. This is achieved by generating system states
according to suitable Boltzmann probabilities [90, 91].

Fig. 2.7 Illustration of the generation of a random perturbation from an initial
state A toward a state B, as typically performed in Monte Carlo schemes

The procedure can be summarized as follows (Metropolis–Hastings algorithm):
1. Draw random numbers from a random number generator.
2. Advance system according to these random numbers (e.g., for the case of
a molecular structure, move atoms accordingly, as illustrated in Fig. 2.7).
3. Accept or reject new configuration, according to an energy criterion.
4. If the number of iterations N < N_A, return to step 1; otherwise, continue with step 5.
5. The set of configurations obtained based on this scheme is used to calculate
ensemble properties.
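The steps above can be sketched in a few lines of Python for the simplest possible "system": one particle in a 1D harmonic well, U(x) = (1/2) x², in units where k_B T = 1. The potential, step size, and sample count are illustrative choices, not from the text.

```python
# Sketch of the Metropolis-Hastings accept/reject loop (steps 1-5)
# for a single particle in a hypothetical 1D harmonic well.
import math
import random

random.seed(0)

def U(x):
    return 0.5 * x * x

x = 0.0
samples = []
for _ in range(20000):
    x_trial = x + random.uniform(-0.5, 0.5)   # steps 1-2: random move A -> B
    dU = U(x_trial) - U(x)
    # Step 3: accept with probability min(1, exp(-dU / (k_B T))).
    if dU <= 0.0 or random.random() < math.exp(-dU):
        x = x_trial
    samples.append(x)

# Step 5: ensemble estimate of <x^2>; for this well with k_B T = 1,
# the exact canonical-ensemble value is 1.
mean_x2 = sum(s * s for s in samples) / len(samples)
print(abs(mean_x2 - 1.0) < 0.3)  # -> True
```

Note that the sequence of accepted states carries no time information; it is only the ensemble of visited configurations that is physically meaningful.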
Many Monte Carlo algorithms employ such a procedure to determine a new
state for a system from a previous one. Thereby, the specific moves can be
chosen arbitrarily, which makes this method very widely applicable. However,
a drawback or limitation of this method is that it requires additional knowledge
of the system behavior, in particular of how the system may evolve, since this
is required for the generation of new configurations.
Figure 2.8 summarizes one of the most popular Monte Carlo schemes, the
Metropolis–Hastings algorithm.

Comparison of Monte Carlo and Molecular Dynamics

In contrast to Monte Carlo, molecular dynamics enables one to obtain actual


deterministic trajectories, and thus provides detailed information about the
full dynamical trajectories. Molecular dynamics can model processes that are
characterized by extreme driving forces and that are nonequilibrium processes.
A prominent example for which molecular dynamics is particularly suitable
is fracture. Provided expressions exist for the atomic interactions, molecular
dynamics modeling provides an excellent physical description of the fracture
processes, as it can naturally describe the atomic bond breaking processes.
Other modeling approaches, such as the finite element method, are based on
empirical relations between load and crack formation and/or crack propaga-
tion. In contrast, molecular dynamics does not require such input parameters.
46 Atomistic Modeling of Materials Failure

Fig. 2.8 Summary of the Metropolis–Hastings Monte Carlo algorithm. Please see
Figure 2.7 for an illustration of how state B is generated based on a random pertur-
bation from state A. The procedure is repeated NA times, the number of iterations.
The number of steps is chosen so that convergence of the desired property is achieved

In principle, all parameters required for a molecular dynamics simulation can
be derived from first principles, or quantum mechanical calculations (see also
discussion above). Monte Carlo can typically only be used for equilibrium
processes, as it does not provide information about how a system goes from
an unstable state A to a stable state B.
Many materials deformation processes such as fracture are examples for
such phenomena. Thus, in the remainder of this book we focus on molecular
dynamics methods, as they are capable of providing insight into the fracture
mechanisms, which is an important aspect of modeling and understanding
how materials fail under extreme conditions.

2.5 Classes of Chemical Bonding


The behavior of molecules is intimately linked to the interactions of atoms,
which are fundamentally governed by the laws of quantum chemistry. In met-
als, for example, bonding is primarily nondirectional, and can be characterized
by positive ions embedded in a gas of electrons (this is often referred to as the
electron gas model). Other materials show greater chemical complexity – often
featuring many different chemical bonds with varying strength, such as exemplified in materials including cement, proteins, or ceramics, or at interfaces
between metallic systems and organic components.

Fig. 2.9 Schematic of the typical characteristic of a chemical bond, showing repulsion at small distances below the equilibrium separation r0, and attraction at larger
distances
Despite the differences among chemical bonds, many atom–atom
interactions show similar characteristic features. Figure 2.9 depicts the typical
characteristic of a chemical bond, showing repulsion at small distances below
the equilibrium separation r0 of a pair of atoms, and attraction at larger
distances.
In general, for any material we must consider the interplay of chemical
interactions that include, ordered by their approximate strength:
• Covalent bonds (due to overlap of electron orbitals, e.g., found in carbon
nanotubes, C–C bond, organic molecules such as proteins)
• Metallic bonds (found in all metals, e.g., copper, gold, nickel, silver)
• Electrostatic (ionic) interactions (Coulombic interactions, e.g., found in
ceramics such as Al2 O3 or in SiO2 )
• Hydrogen-bonds (e.g., found in polymers, proteins), as well as
• Weak or dispersive van der Waals (vdW) interactions (e.g., found in wax).
Electrostatic interactions can be significantly weakened by screening due
to electrolytes, which can lead to interactions that are weaker than vdW
interactions.
For the implementation of molecular dynamics, we must have mathematical
expressions available that provide models for the energy landscape of
these chemical interactions. In other words, reviewing the formula provided
in (2.54), we must know how the potential energy stored in a bond changes
based on the geometry or the position of the atoms. These models are referred
to as force fields or interatomic potentials. The term “force field” is often used
in the chemistry community, whereas the term “potential” is frequently used
in the physics community. Here we use both terms interchangeably, as
appropriate for the corresponding expressions.

2.6 The Interatomic Potential or Force Field: Introduction
Figure 2.10 depicts a fundamental simplification made in classical molecular
dynamics to replace the atom as a three-dimensional structure by a single
point with a finite mass. This simple picture also illustrates the grand chal-
lenge in developing expressions, to somehow and as accurately as possible
describe the (often complex) effect of the electrons on the atomic interac-
tions. The goal of interatomic potentials or force fields is to give numerical or
analytical expressions that estimate the energy landscape of a large particle
system. This energy landscape is therefore the fundamental input into molec-
ular simulations, in addition to structural information (position of atoms, type
of atoms, and their velocities and accelerations).
In this section, we will review a variety of classical approaches in mod-
eling atomic interactions, ranging through different levels of accuracy and
complexity.
Numerous potentials with different levels of accuracy have been proposed,
each having its disadvantages and strengths. The approaches range from accu-
rate quantum-mechanics-based treatments (e.g., first-principle density func-
tional theory methods [87] or tight-binding potentials [92]), reactive potentials
[93] to multibody potentials (e.g., embedded atom approaches as proposed
in [94]) to the most simple and computationally least expensive pair potentials
(e.g., Lennard-Jones) [9,84]. One of the first molecular dynamics studies was a
Lennard-Jones model of argon in 1964 [83]. Previous studies used hard-sphere
models to describe phase transformations [81,82]. Many potential expressions
are fit carefully so that they closely reproduce the energy landscape predicted
from quantum mechanics methods, while retaining computational efficiency.
The interatomic potential thereby allows one to establish a direct link between
the empirical molecular dynamics methods and quantum chemistry.
Fig. 2.10 Atoms are composed of electrons, protons, and neutrons. Electrons and
protons are negative and positive charges of the same magnitude. In classical molec-
ular dynamics, the three-dimensional atom structure is replaced by a single mass
point

Fig. 2.11 Overview of different simulation tools and the associated length- and
timescales

There is no single approach that is suitable for all materials and for all
materials phenomena. The choice of the interatomic potential depends very
strongly on both the application and the material. Popular choices in par-
ticular for modeling mechanical properties of materials are semiempirical
or empirical methods, which typically allow one to simulate large systems
with many thousands to billions of particles. However, to address different
aspects of the mechanical behavior of a specific material typically requires the
application of a range of simulation approaches.
An overview of the most prominent materials simulation techniques
is shown in Fig. 2.11. In the plot we also indicate which lengthscale and
timescale can be reached with the various methods. The methods included
in the figure refer to quantum mechanics based methods, classical molecular
dynamics methods, as well as numerical continuum mechanics methods.
Quantum-mechanical-based treatments are typically limited to very short
time- and length scales, on the order of a few nanometers and picoseconds.
The assumption of empirical interactions in classical molecular dynamics
schemes significantly reduces the computational burden, and the lengthscale
and timescale that can be reached are dramatically increased, approaching
micrometers and several nanoseconds. For comparison, we include also con-
tinuum mechanics-based simulation tools that can treat virtually any length
scale, but they may lack a proper description at small scales, and they are
therefore often not suitable to describe materials failure processes in full detail
(see discussion in Sect. 1.4). Mesoscopic simulation methods such as discrete
dislocation dynamics can bridge the gap between molecular dynamics and
continuum theories by generating an intermediate scale at which clusters of
atoms or small crystals are treated as a single particle [50, 95–101].
The remainder in this section will be focused mostly on empirical potential
expressions that are suitable for the study of mechanical properties of mate-
rials. How do empirical potentials describe the various chemical interactions?
Often, energy contributions from covalent atom–atom interactions, electro-
static interactions, vdW interactions, and others are summed up individually,
so that
U = UElec + UCovalent + UvdW + UH-bonds + . . . . (2.26)
The challenge is how can these individual terms be approximated, most accu-
rately, for a specific material? In the following sections we will describe some
of the most common empirical potentials that provide such approximations.

2.6.1 Pair Potentials


We begin with the simplest atom–atom interactions for which the potential
energy only depends on the distance between two particles. The total energy
of the system is given by summing the energy of all atomic bonds over all N
particles in the system. The total energy is then given by

U_{\text{total}} = \frac{1}{2} \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} \phi(r_{ij}),   (2.27)

where rij is the distance between particles i and j. Note the factor 1/2 to
account for the double counting of atomic bonds. The procedure of summing
up the energies is shown in a schematic in Fig. 2.12.
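As a small numerical check (a sketch with an arbitrary toy pair function, not from the text), the double sum of (2.27) with its factor 1/2 agrees with summing every distinct bond exactly once:

```python
import itertools

def phi(r):
    # arbitrary toy pair potential (harmonic, equilibrium spacing 1)
    return 0.5 * (r - 1.0) ** 2

def total_energy_double_sum(positions):
    """Direct implementation of (2.27): sum over all i != j, halved."""
    n = len(positions)
    u = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                u += phi(abs(positions[i] - positions[j]))
    return 0.5 * u  # factor 1/2 removes the double counting of each bond

def total_energy_unique_pairs(positions):
    """Equivalent form: each bond (i, j) counted exactly once."""
    return sum(phi(abs(xi - xj))
               for xi, xj in itertools.combinations(positions, 2))

# a small 1D chain of four atoms
atoms = [0.0, 1.0, 2.1, 3.0]
u_double = total_energy_double_sum(atoms)
u_pairs = total_energy_unique_pairs(atoms)
```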
The term φ(rij ) describes the potential energy of a bond formed between
two atoms, as a function of its distance rij . How can one obtain expressions
for φ? A possible approach is sketched in Fig. 2.13, showing how a full-electron
representation of the pair of atoms is reduced to a case where two point par-
ticles interact. The energy–distance relationship must be identical in both
cases. An approach often used is to carry out quantum mechanical calcula-
tions that provides the relationship between distance and energy of a pair of
2 Basic Atomistic Modeling 51

1
2 5

1
2 5

4
Fig. 2.12 Pair interaction approximation. The upper part shows all pair interactions
of atom 1 with its neighbors, atoms 2, 3, 4, and 5. When the bonds to atom 2 are
considered, the energy of the bond between atoms 1 and 2 is counted again (bond
marked with thicker line). This is accounted for by adding a factor 1/2 in (2.27)

atoms, which is then used to determine the parameters of the pair potential
expression. Pair potentials must capture the repulsion at short distances due
to the increasing overlap of electrons in the same orbitals, leading to high
energies due to the Pauli exclusion principle. At large distances, the poten-
tial must capture the effect that atoms attract each other to form a bond. In
many pair potentials, two separate terms are used to describe repulsion and
attraction, and the sum of these repulsive and attractive interactions yield
the total energy dependence on the radius:
φ = φRepulsion + φAttraction . (2.28)
A pair of atoms is in the equilibrium position at a balance between the
attractive and repulsive terms.
Pair potentials are the simplest choice for describing atomic interactions.
Even so, in some materials the interatomic interactions are best described
by pair potentials (because the underlying quantum mechanical governing
equations actually predict such a behavior). Prominent examples include the
noble gases (e.g., argon, neon) [83] as well as Coulomb interactions due to
partial charges. Pair potentials have also proven to be a reasonable model
for more complex materials such as SiO2 [102]. The potential energy of an
individual atom is given by
U_i = \sum_{j=1}^{N_i} \phi(r_{ij})   (2.29)

where N_i is the number of neighbors of atom i (this expression corresponds to
the first summation in (2.27)).

Fig. 2.13 Replacing a full-electron representation of atom–atom interaction by a
potential function that only depends on the distance r between the particles

Usually, the number of neighbors considered
for inclusion in the potential energy calculation is limited to the second or
third nearest neighbors. Popular pair potentials for the simulation of metals
include the Morse potential [103] and the Lennard-Jones (LJ) potential, which
are described, for instance, in [9, 84, 104]. The LJ 12:6 potential is defined as
\phi(r_{ij}) = 4\epsilon_0 \left[ \left( \frac{\sigma}{r_{ij}} \right)^{12} - \left( \frac{\sigma}{r_{ij}} \right)^{6} \right].   (2.30)

The LJ potential can be fitted to the elastic constants and lattice spacing
of metals (however, this model has some shortcomings with respect to the
stacking fault energy and the elasticity of metals). The term with power 12
represents atomic repulsion, and the term with power 6 represents attractive
interactions. The parameter σ scales the length and ε0 the energy of atomic
bonds. Often, pair potentials are cut off smoothly with a spline cutoff function
(see, for instance, [104] or [105]).
For the LJ potential, the equilibrium distance between atoms (denoted as
r0) is given by

r_0 = \sqrt[6]{2}\,\sigma.   (2.31)

The maximum force between two atoms is

F_{\max,\mathrm{LJ}} = \frac{2.394\,\epsilon_0}{\sigma}.   (2.32)
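These relations can be verified numerically; the sketch below (in reduced units σ = ε0 = 1, an arbitrary choice rather than the copper parametrization of [9]) recovers the equilibrium spacing of (2.31) and the maximum force of approximately 2.394 ε0/σ from (2.32):

```python
import math

SIGMA, EPS0 = 1.0, 1.0  # arbitrary LJ units, not a fit to any material

def lj_energy(r):
    """LJ 12:6 potential, (2.30)."""
    return 4.0 * EPS0 * ((SIGMA / r) ** 12 - (SIGMA / r) ** 6)

def lj_force(r):
    """Interatomic force F = -dphi/dr, evaluated analytically."""
    return 4.0 * EPS0 * (12.0 * SIGMA ** 12 / r ** 13
                         - 6.0 * SIGMA ** 6 / r ** 7)

# (2.31): the force vanishes at the equilibrium spacing r0 = 2^(1/6) sigma
r0 = 2.0 ** (1.0 / 6.0) * SIGMA

# (2.32): scan for the largest attractive force at separations beyond r0
f_max = max(-lj_force(r0 + 0.0001 * i) for i in range(10000))
```

At r0 the force is zero and the bond energy equals the well depth −ε0; the largest attractive force occurs at a slightly larger separation, which is the point of instability relevant for bond breaking.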
Figure 2.14 shows a plot of the LJ potential (dotted line) and its deriva-
tive (continuous line, describing the interatomic forces), in a parametrization
for copper as reported in [9]. It also illustrates important points in the LJ
potential, such as the equilibrium distance r0 and the point of largest force
Fmax,LJ .

Fig. 2.14 Plot of the LJ potential and its derivative (for interatomic forces) in a
parametrization for copper as reported in [9]

Another popular pair potential is the Morse potential, defined as

\phi(r_{ij}) = D \left[ 1 - \exp\left( -\beta (r_{ij} - r_0) \right) \right]^2.   (2.33)

A fit of this potential to different metals (as well as different forms of the
Morse potential) can be found, for instance, in [106]. The parameter r0 stands
for the nearest neighbor lattice spacing, and D and β are additional fitting
parameters. The Morse potential is computationally more expensive than the
LJ potential due to the exponential term (however, this is more realistic for
many materials).
An advantage of using pair potentials is the computational efficiency.
Another important advantage is that fewer parameters are involved, which
may simplify parameter studies and the fitting process to different materi-
als. For example, the LJ potential has only two parameters, and the Morse
potential has only three parameters.
The potentials given by (2.30) and (2.33) are strongly nonlinear functions
of the radius r. In some cases it is advantageous to linearize the potentials
around the equilibrium position and define the so-called harmonic potential
(see also Fig. 2.5)

\phi(r_{ij}) = a_0 + \frac{1}{2} k (r_{ij} - r_0)^2   (2.34)

where k is the spring constant, r_0 the equilibrium spacing, and a_0 is a constant
parameter. An important drawback of pair potentials is that elastic properties
of metals cannot be modeled correctly.

2.6.2 Multibody Potentials: Embedded Atom Method for Metals

The concept that the total energy of the system is simply a sum over the energy
contributions between all pairs of atoms in a system is a great simplification
that leads to great challenges. For example, at a surface of a crystal, the
atomic bonds may have different properties than in the bulk. Pair potentials
are not capable of capturing this effect. The limitation of pair potentials to
model more complex situations, in particular the dependence of the properties
of chemical bonds between pairs of atoms on the environment, is sketched
in Fig. 2.15. This behavior is particularly important for metals, because of
quantum mechanical effects that describe the influence of the electron gas.

Fig. 2.15 Difference in bond properties at a surface. Pair potentials (left panel) are
not able to distinguish bonds in different environments, as all bonds are equal. To
accurately represent the change in bond properties at a surface, one needs to adapt
a description that considers the environment of an atom to determine the bond
strength, as shown in the right panel. The bond energy between two particles is then
no longer simply a function of its distance, but instead a function of the positions
of all other particles in the vicinity (that way, changes in the bond strength, for
instance at surfaces, can be captured). Multibody potentials (e.g., EAM) provide
such a description

To accurately represent the change in bond properties at a surface, a
description is needed that considers the environment of an atom to determine
the bond strength. Therefore, the bond energy between two particles is no
longer simply a function of its distance, but instead a function of the posi-
tions of all other particles in the vicinity. This behavior can be captured
in multibody potentials. The idea behind multibody potentials is to incor-
porate more specific information on the bonds between atoms than simply
the distance between two neighbors. In such potentials, the energy of bonds
therefore depends not only on the distance of atoms, but also on its local
environment, that is, on the positions of neighboring atoms. In the case of
metals, the interactions of atoms can be quite accurately described using
potentials based on the embedded atom method (EAM), or other so-called
n-body potentials (e.g., [94, 107]); several variations of the classical EAM
potentials exist [108, 109]. Another, similar approach is based on effective
medium theory (EMT) [110, 111]. Particularly appropriate models have been
reported for metals such as copper and nickel. Other metals (e.g., aluminum)
are more difficult to model with such approaches [112, 113].
An EAM potential for metals is typically given in the form


U_i = \sum_{j=1}^{N_i} \phi(r_{ij}) + f(\rho_i),   (2.35)

where ρi is the local electron density and f is the embedding function. The
electron density ρi depends on the local environment of the atom i, and
the embedding function f describes how the energy of an atom depends on
the local electron density. The electron density itself is typically calculated
based on a simple pair potential that maps the distance between atoms to
the corresponding contribution to the local electron density. The potential
features a contribution by a two-body term φ to capture the basic repulsion
and attraction of atoms (just like in LJ or Morse potentials), in conjunction
with a multibody term that accounts for the local electronic environment of
the atom.
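The structure of (2.35) can be illustrated with deliberately simple toy functions (the exponential forms below are invented for illustration, not a fitted EAM model for any metal):

```python
import math

def pair_phi(r):
    # toy repulsive pair term
    return math.exp(-2.0 * (r - 1.0))

def density_contrib(r):
    # toy contribution of a neighbor at distance r to the electron density
    return math.exp(-r)

def embed(rho):
    # toy embedding function; the square root echoes second-moment models
    return -math.sqrt(rho)

def eam_site_energy(neighbor_distances):
    """Energy of one atom following (2.35): pair sum plus f(rho_i)."""
    rho = sum(density_contrib(r) for r in neighbor_distances)
    return sum(pair_phi(r) for r in neighbor_distances) + embed(rho)

# identical bond lengths, but different coordination: in the bulk
# (12 neighbors) versus at a surface (8 neighbors)
per_bond_bulk = eam_site_energy([1.0] * 12) / 12
per_bond_surface = eam_site_energy([1.0] * 8) / 8
```

Because the embedding term is a nonlinear function of the density, the energy per bond differs between the two environments; this is exactly the surface effect sketched in Fig. 2.15 that no pure pair potential can reproduce.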
Overall, multibody potentials allow a much better reproduction of the
elastic properties of metals than pair potentials (e.g., [109]). For instance,
most real materials violate the Cauchy relation (that is, the condition that
c1122 = c1212 ). Any pair potential predicts an agreement with the Cauchy rela-
tion [109]. Multibody potentials are capable of reproducing the appropriate
elastic behavior. Figure 2.16 illustrates how an EAM-type multibody potential
can represent different effective pair interactions between bonds at a surface
and in the bulk.
However, most conventional multibody potentials are not capable of mod-
eling any effect of directional bonding. Whereas this is not important for
metals such as Ni or Cu, this becomes quite significant in materials with a
more covalent character of the interatomic bonding. To address these effects,
modified embedded atom potentials (MEAM) have been proposed that can
be parameterized, for instance, for silicon [114].
Fig. 2.16 This plot illustrates how an EAM-type multibody potential can represent
different effective pair interactions between bonds at a surface and in the bulk

2.6.3 Force Fields for Biological Materials and Polymers

The bases for simulations of polymers, organic substances, or proteins are force
fields that describe the various chemical interactions based on a combination
of energy terms. This is required since in these materials, the set of char-
acteristic chemical bonds is much more diverse than in metals, for instance,
which requires the explicit consideration of ionic, covalent, and vdW interac-
tions. Figure 2.17 illustrates this chemical complexity, exemplified in a small
alpha-helical coiled coil protein domain.
A prominent example for this approach is the classical force field CHARMM
[115]. The CHARMM force field is widely used in the protein and biophysics
community, and provides a reasonable description of the behavior of proteins.
This force field is based on harmonic and anharmonic terms describing
covalent interactions, in addition to long-range contributions describing vdW
interactions, ionic (Coulomb) interactions, as well as hydrogen bonding. Since
the bonds between atoms are modeled by harmonic springs or its variations,
bonds (other than H-bonds) between atoms cannot be broken, and new bonds
cannot be formed. Also, the charges are fixed and cannot change, and the
equilibrium angles do not change depending on stretch. The CHARMM force
field belongs to a class of models with similar descriptions of the interatomic
forces; other models include the DREIDING force field [116], the UFF force
field [117], the GROMOS force field, or the AMBER model (see [118], for
instance, for a review of various force fields for biological systems).
In the CHARMM model, the mathematical formulation for the empirical
energy function that contains terms for both internal and external interactions
has the form:

Usystem = Ubond + Uangle + Utorsion + UCoulomb + UvdW + . . . ,   (2.36)

representing an approach to split up different energy contributions. However,
most of the terms, in particular those describing the bond interactions, are
harmonic expressions.

Fig. 2.17 Chemical complexity in proteins involves a variety of chemical elements
and different chemical bonds between them. The snapshot shows a small alpha-helical coiled coil protein domain

Fig. 2.18 Schematic of the contributions of the different terms in the potential
expressions given in (2.36), illustrating the contributions of bond stretching, angle
bending, bond rotations, electrostatic interactions, and vdW interactions
An expression similar to (2.34) describes the energy contributions to the
bond stretching term Ubond (the total energy due to bond stretching is
obtained by summing over all pairs of atoms). For bond bending between
three atoms i, j, and k,
\phi_{\text{bend}} = b_0 + \frac{1}{2} k_{\text{bend}} (\theta_{ijk} - \theta_0)^2   (2.37)
where θ0 is the equilibrium bond angle (depends on the particular triplet of
atoms considered), and θ is the angle between the three atoms i, j, and k.
For torsional energies between a group of four atoms,
\phi_{\text{torsion}} = t_0 + \frac{1}{2} k_{\text{torsion}} (1 - \cos(\theta_1))   (2.38)
where θ1 is the torsional angle and ktorsion is the appropriate spring constant
that describes the magnitude of the resistance to torsional deformation of a
group of atoms.
The contributions from all pairs of atoms (for bond stretching), all triplets
of atoms (for bond bending), and quadruples of atoms (for bond rotation) are
summed up to yield the total potential energy of the system:
U_{\text{bond}} = \sum_{\text{pairs}} \phi_{\text{bond}},   (2.39)

U_{\text{bend}} = \sum_{\text{triplets}} \phi_{\text{bend}},   (2.40)

and

U_{\text{torsion}} = \sum_{\text{quadruples}} \phi_{\text{torsion}}.   (2.41)

Due to the harmonic approximation, these expressions are only valid for
small deformation from the equilibrium configuration of the bond. Large
deformation or fracture of these bonds cannot be described. Figure 2.18
schematically illustrates the energy contributions provided in (2.36).
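A minimal sketch of how the bonded terms of (2.34) and (2.37)-(2.41) are assembled (the spring constants and equilibrium values below are hypothetical, with the additive constants a0, b0, t0 set to zero; they are not CHARMM parameters):

```python
import math

# illustrative parameters only
K_BOND, R0 = 100.0, 1.0                     # bond stretching, (2.34)-type
K_BEND, THETA0 = 20.0, math.radians(109.5)  # angle bending, (2.37)
K_TORSION = 2.0                             # torsion, (2.38)

def u_bond(r):
    return 0.5 * K_BOND * (r - R0) ** 2

def u_bend(theta):
    return 0.5 * K_BEND * (theta - THETA0) ** 2

def u_torsion(theta1):
    return 0.5 * K_TORSION * (1.0 - math.cos(theta1))

def bonded_energy(bond_lengths, angles, torsion_angles):
    """Sum over pairs, triplets, and quadruples as in (2.39)-(2.41)."""
    return (sum(u_bond(r) for r in bond_lengths)
            + sum(u_bend(t) for t in angles)
            + sum(u_torsion(t) for t in torsion_angles))

# at the equilibrium geometry every harmonic term vanishes
e_equilibrium = bonded_energy([R0, R0], [THETA0], [0.0])
# stretching a single bond by 10 percent costs a finite energy
e_stretched = bonded_energy([1.1 * R0], [], [])
```

The quadratic form of each term is what restricts such force fields to small deviations from equilibrium, as discussed above.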
The Coulomb energies are evaluated between pairs of atoms and are
described as
\phi_{\text{Coulomb}} = \frac{q_i q_j}{\epsilon_1 r_{ij}},   (2.42)

where q_i and q_j are the partial charges of atoms i and j (in units of the
elementary charge, e = 1.602 × 10−19 C), and ε1 is the effective dielectric
constant (ε1 = 1 in vacuum). The total Coulomb
energy is then given by
U_{\text{Coulomb}} = \frac{1}{2} \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} \frac{q_i q_j}{\epsilon_1 r_{ij}}.   (2.43)

The calculation of electrostatic interactions poses a significant computational
burden, as the particle interactions are long range. In particular when
using small, periodic unit cell approaches, the interactions from one cell side
to another must be considered very carefully. Several methods have been
developed to address this problem. The most prominent technique is the
particle-mesh Ewald method (PME), which significantly reduces the computa-
tional effort in accurately calculating these interactions. Most modern molec-
ular dynamics codes have implementation of the PME technique or related
approaches. The vdW terms are typically modeled using Lennard-Jones 6–12
terms. Both the vdW terms and the Coulomb terms contribute to the exter-
nal or nonbonded interactions. H-bonds are often included in the vdW terms.
Some flavors of CHARMM-type potential provide explicit expressions for H-
bonds that involve angular terms to provide a more refined description of the
spatial orientation between H-bond acceptor and donor pairs.
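For small systems, the direct pairwise sum of (2.43) can be written down before resorting to Ewald or PME techniques (a sketch in reduced units, with the Coulomb prefactor absorbed into the charges and ε1 = 1):

```python
def coulomb_energy(charges, positions, eps1=1.0):
    """Direct pairwise sum of (2.43), O(N^2) in the number of atoms.

    Each distinct pair is counted once, which is equivalent to the
    full i != j double sum with the factor 1/2."""
    n = len(charges)
    u = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            dz = positions[i][2] - positions[j][2]
            r = (dx * dx + dy * dy + dz * dz) ** 0.5
            u += charges[i] * charges[j] / (eps1 * r)
    return u

# two opposite unit charges at distance 2; u is -1/2 in these units
u_pair = coulomb_energy([1.0, -1.0], [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)])
```

The quadratic scaling and the slow 1/r decay of each term are precisely why PME-type methods are needed for large periodic systems.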
The parameters in such force fields are often determined from quantum
chemical simulation models by using the concept of force field training. The
specific terms of the force field formulation that are considered for a particular
chemical bond are controlled by atom type names. That is, each atom type is
not only specified by its element name but also by a tag that denotes which
type of chemical bond is attached to it. For instance, the tag CA refers to an
aromatic carbon atom, CC to a carbonyl carbon atom, and C to a polar car-
bon atom (e.g., in a protein backbone). Thus, a critical step before a molecular
simulation with a CHARMM-type force field can be carried out is the assign-
ment of these atom types. The information of element types as provided in
the Protein Data Bank, for instance, is insufficient. Automated programs have
been developed to carry out this typing task by comparing particular molecular structures with templates in a large database, and assigning appropriate
atom types according to a best-fit comparison.
Force fields for protein structures typically also include simulation models
to describe water molecules (e.g., TIP3P, TIP4P, SPC and SPC/E, ST2),
an essential part of any simulation of protein structures. These water force
fields are composed of similar harmonic, bond angle, dispersive, and Coulomb
expressions.
Table 2.2 summarizes several popular choices for organic force fields.

2.6.4 Bond Order and Reactive Potentials

An important class of multibody potentials is based on the concept of bond


orders, a model particularly suitable to describe the forces in covalently
bonded materials (that is, chemical bonds that have a strong directional
dependence). This is important, for instance, in carbon-based materials, such
as carbon nanotubes. The key concept is that the bond strength between two
Force field name   Primary application           Notes
CHARMM             Proteins
AMBER              Proteins
CVFF               Small molecules
CFF                Organic substances
GROMOS             Proteins, organic molecules   Implemented in GROMACS code
MM3                Organic molecules
UFF                Organic molecules             Universal force field
DREIDING           Organic molecules
TIP3P, TIP4P       Water molecules
SPC, SPC/E         Water molecules

Table 2.2 Overview of force fields suitable for organic substances

atoms is not constant, but depends on the local environment, similar to the
EAM approach (please see [119] for a discussion). However, specific terms
are included to specify the directional dependence of the bonding. The idea
is based on the concept of mapping bond distances to bond orders, which
enables one to determine specific quantum chemical states of a molecular
structure. The concept of bond order was initially introduced by Pauling
[120]. Very well-known models in this class of potentials are the reactive
bond order potentials (REBO) [121, 122], the Tersoff potential [10, 123], the
Stillinger–Weber potential [124], Brenner’s force fields [125, 126], the Stu-
art reactive potential [127], and more recently, the ReaxFF reactive force
field [93].

Abell–Tersoff Approach

The basic concept of bond order potentials is simple to explain. The key idea
is to modulate the bond strength based on the atomic environment, taking
advantage of some theoretical chemistry principles.
Instead of expressing φ(rij ) as a harmonic function or an LJ function (see
above), in the Abell–Tersoff approach the interaction between two atoms is
expressed as

φ(rij ) = φRepulsion (rij ) − Mij φAttraction (rij ), (2.44)

where φRepulsion(rij) is a repulsive term and φAttraction(rij) is an attractive
term. The parameter Mij that multiplies the attractive interactions represents
a many-body interaction parameter. This parameter describes how strong the
attraction is for a particular bond, from atom i to atom j. Most importantly,
the parameter Mij can range from zero to one, and describes how strong this
particular bond is, depending on the particular chemical environment of atom
i. It can thus be considered a normalized bond order, following the concept of
the Pauling relationship between bond length and bond order.

Fig. 2.19 The plot shows the cohesive energy per atom (upper plot, in eV) and
the bond length (lower plot, in Å), for several real and hypothetical polytypes of
carbon, comparing the predictions from the Tersoff potential [10] for C with experimental and other computational results. The structures include a C2 dimer molecule,
graphite, diamond, simple cubic, BCC, and FCC structures. The squares correspond
to experimental values for these phases and calculations for hypothetical phases [11].
The circles are the results of Tersoff's model [10]. The continuous lines are spline
fits to guide the eye. Reprinted from: J. Tersoff, Empirical interatomic potentials
for carbon, with applications to amorphous carbon, Physical Review Letters, Vol.
61(25), 1988, pp. 2879–2883. Copyright © 1988 by the American Physical Society

Abell suggested that

M_{ij} \sim Z^{-\delta}   (2.45)
where δ depends on the particular system and Z is the coordination number of
atom i that depends on the bond radius. For pair potentials, Mij ∼ Z, which
is not true for many real materials. The Abell–Tersoff approach provides a
realistic model for these effects.
In the Tersoff potential [10, 123], the many-body term depends explicitly
on the angles of triplets of atoms, in addition to considering the effect of
coordination. Thus, this and related potentials are also referred to as three-
body potentials. The explicit angular dependence also illustrates the difference
to EAM potentials: Here, the multibody term solely depends on the elec-
tron density, and not on any directional information. Tersoff has successfully
parametrized his potential approach for carbon, silicon, and other semicon-
ductors. Figure 2.19 shows the cohesive energy per atom for several real and
hypothetical polytypes of carbon, comparing the prediction from experiment
with the results obtained from the potential.
These equations immediately lead to a relationship between bond length,
binding energy, and coordination, through the parameter Mij . The modula-
tion of the bond strength effectively leads to a change in spring constant as a
function of bond environment,

k(r) ∼ k0 Mij (Z, δ). (2.46)

Note that k0 is a reference spring constant, which is modulated by the
atomic environment that is essentially dependent on the bond radius. This
method has been very successful to describe the interatomic bonding in several
covalently bonded materials, for example the C–C bonds in diamond, graphite,
and even hydrocarbon molecules [125, 126]. The coordination number is a
concept also used in lattice systems, for example crystals. In organic molecules,
the coordination number can be thought of as the number of covalent bonds
that an atom has made.
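A toy illustration of (2.44)-(2.46) (the exponential forms below are invented for illustration, not the actual Tersoff functions): the attractive contribution of a bond is scaled by a normalized bond order Mij ~ Z^(-δ), so that the bonds of a highly coordinated atom are individually weaker:

```python
import math

def bond_energy(r, z, delta=0.5):
    """Abell-Tersoff-style pair term, (2.44): repulsion minus a
    bond-order-modulated attraction (toy exponential forms)."""
    phi_rep = math.exp(-4.0 * (r - 1.0))
    phi_att = 2.0 * math.exp(-2.0 * (r - 1.0))
    m_ij = z ** (-delta)  # (2.45): normalized many-body bond order
    return phi_rep - m_ij * phi_att

# the same bond geometry, evaluated for a 2-coordinated atom
# (e.g., a chain) and a 12-coordinated atom (close-packed crystal)
e_low_coordination = bond_energy(1.0, z=2)
e_high_coordination = bond_energy(1.0, z=12)
```

In this sketch the low-coordination bond is more strongly bound, which is the qualitative trend behind the Pauling bond length versus bond order relationship exploited by Tersoff-type potentials.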

Reactive Force Fields

Many attempts to accurately describe the transition energies during chemical
reactions using descriptions more empirical than purely quantum mechanical
(QM) methods have failed.
Reactive force fields represent a strategy to overcome some of the limitations
of classical force fields, in particular the fact that these descriptions are not
able to describe chemical reactions. In fact, the behavior of chemical bonds at
large stretch has major implications for the mechanical response, as it translates into the properties of molecules at large strain, a phenomenon referred
to as nonlinear elasticity or hyperelasticity.

Fig. 2.20 An example to demonstrate the basic concept of the ReaxFF potential.
It has been developed to accurately describe transition states in addition to ground
states

Reactive potentials are based on a more sophisticated formulation than
most nonreactive potentials. A bond length to bond order relationship is used
to obtain smooth transition between different bond types, including single
bonds, double bonds and triple bonds. Typically, all connectivity-dependent
interactions (that means, valence and torsion angles) are formulated to be
bond-order dependent. This ensures that their energy contributions disap-
pear upon bond dissociation so that no energy discontinuities appear during
reactions (see Fig. 2.20). The reactive potential also features nonbonded
interactions (shielded van der Waals and shielded Coulomb).
Several flavors of reactive potentials have been proposed in recent years.
Reactive potentials can overcome the limitations of empirical force fields and
enable large-scale simulations of thousands of atoms with quantum mechanics
accuracy. The reactive potentials, originally only developed for hydrocarbons,
have been extended recently to cover a wide range of materials, including
metals, semiconductors, and organic chemistry in biological systems such
as proteins. Here we review in particular the ReaxFF formulation [93, 128–
131]. The most important features of the class of ReaxFF reactive force
fields are
• A bond length to bond order relationship is used to obtain a smooth
transition of the energy from a nonbonded to single, double, and triple
bonded molecules.
• All connectivity-dependent interactions (that is, valence and torsion
angles) are made bond-order dependent, which ensures that their energy
contributions disappear upon bond dissociation.
• Features shielded nonbonded interactions that include van der Waals and
Coulomb interactions, without discrete cutoff radius to ensure a continuous
energy landscape.
• ReaxFF uses a geometry-dependent charge calculation scheme (similar to
the Charge Equilibration method, QEq [132]) that accounts for polariza-
tion effects and redistribution of partial atomic charges as the molecule
or cluster of atoms changes its shape (e.g., determine the partial atom
charges qi and qj ).
• Most parameters in the formulation have physical meaning, such as
corresponding distances for bond order transitions, atomic charges and
others.
• All interactions in ReaxFF feature a finite cutoff of 10 Å.
The total energy of a system in the ReaxFF model is expressed as the sum
of different contributions that account for specific chemical interactions.
Usystem = Ubond + Uunder + Uover + Uangle + Utors +
Uconj + UH-bonds + UCoulomb + UvdW (2.47)
The term Ubond describes the energy contributions due to covalent bonds.
The terms Uunder and Uover describe energy penalties for under- and over-
coordination. Angular effects are included in Uangle , and contributions due to
torsion are included in Utors . The term Uconj describes energetic contributions
of resonance effects. A maximum contribution of the conjugation energy is
obtained when successive bonds have bond order values of 1.5, as it is the
case in benzene, for instance. H-bonds are treated in the term UH-bonds , and
these interactions are calculated between groups X–H and Y, where X and Y
are atoms that can form H-bonds (for instance, N, O). Nonbonded two-body
interactions are included in UvdW and in UCoulomb . They are included for all
atom pairs whether they are bonded or nonbonded. This is important to avoid
energy discontinuities when chemical reactions occur. To enable the calcula-
tion of these interactions for all atom pairs, a shielded Coulomb potential of
the form

$$\phi_i = \sum_{i=1}^{N} \sum_{j=1}^{N} \frac{q_i q_j}{\left(r_{ij}^{3} + \gamma_{ij}^{-3}\right)^{1/3}} \qquad (2.48)$$

is used (the parameter γij is a force field parameter that is adapted to
reproduce the orbital overlap contributions).
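As a minimal sketch (with hypothetical charges and a made-up γij value; real parameters come from the force field file), the shielding can be seen to remove the small-distance singularity:

```python
def shielded_coulomb(q_i, q_j, r_ij, gamma_ij):
    """Shielded Coulomb energy per (2.48); the gamma_ij term keeps the
    energy finite even as the distance r_ij goes to zero."""
    return q_i * q_j / (r_ij**3 + gamma_ij**(-3)) ** (1.0 / 3.0)

# Hypothetical charges and shielding parameter:
e_far = shielded_coulomb(0.5, -0.5, 10.0, 0.7)   # ~ bare Coulomb q_i q_j / r
e_near = shielded_coulomb(0.5, -0.5, 0.0, 0.7)   # finite even at r_ij = 0
```

At large separation the shielded form approaches the bare Coulomb interaction, while at r_ij = 0 it tends to q_i q_j γ_ij instead of diverging, which is what keeps the energy landscape continuous during reactions.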
We explain the approach used in ReaxFF for the example of the term
Ubond . First the bond order is calculated, according to
$$BO_{ij} = \exp\left[a_{\sigma}\left(\frac{r_{ij}}{r_0}\right)^{b_{\sigma}}\right] + \exp\left[a_{\pi}\left(\frac{r_{ij}}{r_0}\right)^{b_{\pi}}\right] + \exp\left[a_{\pi\pi}\left(\frac{r_{ij}}{r_0}\right)^{b_{\pi\pi}}\right] \qquad (2.49)$$
The terms ai and bi are fitting parameters that describe the dependence of
the bond order on the bond geometry. The numerical values are adapted for
each bond type. The term rij is the distance between atoms i and j. This
equation yields the graphs shown in Fig. 2.21. Corrected bond orders are
calculated from the uncorrected values BOij via correction functions that
account for the effect of over- and under-coordination. The correction func-
tions depend on the degree of deviation of the sum of the uncorrected bond
orders around an atomic center from its valency Vali,

$$\Delta_i = \sum_{j=1}^{n_{\rm bonds}} BO_{ij} - \mathrm{Val}_i. \qquad (2.50)$$

This correction refers to the fact that carbon, for instance, cannot have more
than four bonds, or hydrogen cannot have more than one bond. This expres-
sion illustrates that the bonding term is a multibody expression, as it depends
on all j neighbors of an atom i.
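The mapping from bond distance to bond order, and the coordination deviation of (2.50), can be sketched as follows; the σ/π/ππ parameters and r0 below are hypothetical placeholders, not fitted ReaxFF values:

```python
import math

# Hypothetical (a, b) parameters for the sigma, pi, and pi-pi terms;
# a < 0 so that each contribution decays with distance (cf. Fig. 2.21).
PARAMS = [(-0.1, 6.0), (-0.2, 5.0), (-0.3, 4.0)]
R0 = 1.4  # illustrative equilibrium bond distance

def bond_order(r_ij, params=PARAMS, r0=R0):
    """Uncorrected bond order BO_ij per (2.49): a sum of one
    exponential term per bond type (sigma, pi, pi-pi)."""
    return sum(math.exp(a * (r_ij / r0) ** b) for a, b in params)

def coordination_deviation(bond_orders, valency):
    """Delta_i per (2.50): summed bond orders around an atom minus
    its valency (e.g., 4 for carbon)."""
    return sum(bond_orders) - valency

bo_short = bond_order(1.0)  # compressed bond: large bond order
bo_long = bond_order(3.0)   # stretched bond: bond order decays toward zero
```

The monotonic decay of the bond order with distance is what allows a single functional form to interpolate continuously between bonded and nonbonded states.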
Once the bond orders are known, the energy contributions can be calcu-
lated. For the term considered here,


$$\phi_{\mathrm{bond},ij} = -D_e\, BO_{ij} \exp\left[p_{be,1}\left(1 - BO_{ij}^{\,p_{be,1}}\right)\right], \qquad (2.51)$$

where De and pbe,i are additional bond parameters. The total bond energy is
then given by a summation over all bonds in the system,

Fig. 2.21 Illustration of basic concept of bond order potentials. Subplot (a) shows
how the bond order potential allows for a more general description of chemistry, since
all energy terms are expressed dependent on bond order. In contrast, conventional
potentials (such as LJ, Morse) express the energy directly as a function of the bond
distance as shown in subplot (b). Subplot (c) illustrates the concept for a C–C single,
double, and triple bond, showing how the bond distance is used to map to the bond
order, serving as the basis for all energy contributions in the potential formulation
defined in (2.47)


$$U_{\rm bond} = \sum_{\text{all bonds } ij} \phi_{{\rm bond},ij} \qquad (2.52)$$

Equation (2.51) also illustrates that the energy contributions vanish when the
bond order goes to zero, which corresponds to a broken chemical bond.
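A sketch of (2.51) with hypothetical De and pbe,1 values illustrates the smooth disappearance of the bond energy as the bond order goes to zero:

```python
import math

def bond_energy(bo, d_e=100.0, p_be=0.5):
    """Bond energy per (2.51); d_e and p_be here are hypothetical
    placeholder values, not fitted ReaxFF parameters."""
    return -d_e * bo * math.exp(p_be * (1.0 - bo ** p_be))

e_full = bond_energy(1.0)    # fully formed bond: exactly -D_e
e_broken = bond_energy(0.0)  # broken bond: contribution vanishes smoothly
```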
All other terms are also expressed as a function of bond orders. For
instance, the angle contributions are given as

Uangle = f (θijk , BOij , BOjk ). (2.53)

It is noted that this illustrates a distinction to the Tersoff potential. In Ter-
soff's approach, the angular dependence is included in the multibody term for
pairs of atoms. In the ReaxFF approach, an explicit angular term is included,
similar to the approach used in CHARMM-type force fields.
A crucial aspect of the ReaxFF force field is that all parameters are derived
from fitting against quantum mechanical data (DFT level) [93]. This process
is referred to as force field training. The basic concept is to require a close-
as-possible agreement between the quantum mechanical prediction and the
force field result, for a wide range of properties. Typically these properties
include elastic properties and equation of state, surface energies, dissociation
energies and landscapes, or the interaction energies of organic compounds
with surfaces.
Due to the increased complexities of force field expressions and the charge
equilibration step that is carried out at each force calculation, reactive force
fields are between 50 and 100 times more expensive than nonreactive force
fields, yet several orders of magnitude faster than DFT-level calculations that
would be able to describe bond rupture as well.

Fig. 2.22 Results of a ReaxFF study of water formation, comparing the production
rate with and without a Pt catalyst. The presence of the Pt catalyst significantly
increases the water production rate (results taken from [12])

An example simulation is shown in Fig. 2.22 [12]. This plot depicts the
results of a ReaxFF study of water formation, comparing the production rate
with and without a Pt catalyst (same pressure and temperature, the only
difference is the catalyst). It is evident that the presence of the Pt catalyst
significantly increases the water production rate.

Fig. 2.23 Water production at varying temperature, for constant pressure. Subplot
(a) depicts the water production rate. Subplot (b) shows an Arrhenius analysis,
enabling us to extract the activation barrier for the elementary chemical process of
12 kcal/mol. This result is close to DFT level calculations of the energy barrier [12]

Figure 2.23 depicts an analysis of the system with Pt catalyst, for vari-
ations of the temperature. These simulations involve thousands of reactive
atoms, a computational task that cannot be achieved using DFT or simi-
lar approaches. Thus, the ReaxFF approach provides a very useful bridge
between quantum mechanical methods and empirical potentials, as illus-
trated in Fig. 2.24. ReaxFF simulations have been reported with system sizes
approaching millions of atoms, as recently reported in the ReaxFF parallelized
algorithm [133].

Fig. 2.24 The ReaxFF force field fills a gap between quantum mechanical methods
(e.g., DFT) and empirical molecular dynamics

2.6.5 Limitations of Classical Molecular Dynamics

Atomistic or molecular simulation is a fundamental approach, since it consid-
ers the basic building blocks of materials as its smallest entity, atoms. Stress
singularities at crack tips are handled naturally, thereby avoiding many chal-
lenges associated with continuum methods. Molecular dynamics is also a quite
appropriate tool for describing material deformation under extremely high
strain rates that are not accessible by other methods (FE, DDD, and other
approaches).
However, molecular dynamics simulations feature several limitations. It is
important to remember these limitations during the course of model develop-
ment and data analysis and interpretation. For example, molecular dynamics
simulation models typically only allow one to study materials with dimensions
of several hundred nanometers or below. The time scale limitation is another
serious limit of molecular dynamics, which has prevented many researchers
from studying phenomena of interest or from making rigorous links to labora-
tory experiments that are often carried out at very different time scales.
Please see [134] for a description of these issues. Further associated aspects
and limitations will be discussed in the forthcoming sections.
To treat electrons properly one needs to incorporate quantum mechanical
methods explicitly into the molecular models, which feature degrees of freedom
that describe the structure of electrons. This is particularly significant for
electronic properties, optical phenomena, and magnetic properties, which all
require such quantum mechanical treatments. The physical reason for this is
that these properties are derived from the electronic structure associated with
atoms, molecules, or system of atoms rather than being derived only from the
positions of atoms or the interatomic forces.
Some methods have been developed that utilize simple potential expres-
sions to describe the complex quantum mechanical effects. For instance, the
electron force field, or eFF [135], utilizes a single approximate potential
between nuclei and electrons, and correctly describes many phases relevant
for warm dense hydrogen. The eFF model thereby provides a simplified solu-
tion of the time-dependent Schroedinger equation. The potential formulation
requires only moderate computational effort, since the computational com-
plexity of the terms is similar to those used in traditional force fields that
treat atoms only as particles. Thus it may be possible to use this new force
field to simulate large excited systems that are currently beyond the reach of
quantum mechanical methods.

2.7 Numerical Implementation


Here we summarize a few important numerical and implementation aspects
of molecular dynamics simulation methods.

Fig. 2.25 Schematic of the numerical scheme in carrying out molecular dynamics
simulations

Figure 2.25 depicts the basic numerical scheme of carrying out a molecular
dynamics study. The basic steps of a molecular dynamics simulation are
• Define initial conditions and boundary conditions (including positions and
velocities at t = t0 ); typically the velocities of particles are drawn from a
Maxwell–Boltzmann distribution.
• Obtain updating method, e.g., the Verlet scheme as described in (2.13);
choose time step and thermodynamical ensemble.
• Obtain an expression for forces, by defining an approximation of the
potential energy landscape U (r).
• Analyze data using statistical methods.
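The steps above can be sketched in a minimal toy loop; the harmonic force, the 1D state variables, and all parameter values below are illustrative assumptions, and the integrator is a velocity-Verlet variant of the update scheme in (2.13):

```python
import random

def maxwell_boltzmann_velocities(n, kT=1.0, mass=1.0):
    """Initial velocities: each Cartesian component is Gaussian with
    variance kT/m (Maxwell-Boltzmann distribution, reduced units)."""
    sigma = (kT / mass) ** 0.5
    return [[random.gauss(0.0, sigma) for _ in range(3)] for _ in range(n)]

def run_md(x, v, force, dt, n_steps, mass=1.0):
    """Minimal velocity-Verlet integration loop for a single degree of
    freedom (a 1D toy example; x and v are plain floats for brevity)."""
    f = force(x)
    for _ in range(n_steps):
        x += v * dt + 0.5 * (f / mass) * dt * dt  # position update
        f_new = force(x)
        v += 0.5 * (f + f_new) / mass * dt        # velocity update
        f = f_new
    return x, v

# Harmonic oscillator: the total energy should stay nearly conserved.
x_t, v_t = run_md(1.0, 0.0, lambda x: -x, dt=0.01, n_steps=1000)
```

Good long-time energy behavior on such a test problem is the usual first sanity check of an integrator and time step choice before running a production simulation.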

Fig. 2.26 Schematic of the implementation of periodic boundary conditions

2.7.1 Periodic Boundary Conditions

Periodic boundary conditions (PBCs) are a widely used concept in molecu-
lar dynamics. Figure 2.26 depicts a schematic of the implementation of peri-
odic boundary conditions. PBCs allow one to study bulk properties (that
means, there are no free surfaces) with a small number of particles (here
N = 3 for three particles).
For a variety of thermodynamical properties it has been demonstrated that
this approach is very efficient. However, for mechanisms or phenomena that
involve inhomogeneous stress and deformation fields, the approach does not
give useful results. To represent such spatially varying fields it is vital to make
the system larger and model the entire collection of atoms that resemble the
physical space.
When periodic boundary conditions are used, all particles are “connected”
and do not sense the existence of a free surface. For numerical reasons, the
original cell is surrounded by 26 image cells (8 in 2D). Image particles in these
image cells move in exactly the same way as original particles. These image
copies are necessary to carry out force calculations in the system, as discussed
in Sect. 2.7.2.
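The coordinate wrapping and the associated minimum-image distance (used later in force calculations) can be sketched as follows; the box size is an arbitrary example value:

```python
def wrap(x, box):
    """Map a coordinate back into the primary cell [0, box)."""
    return x % box

def minimum_image(dx, box):
    """Minimum-image convention: shortest separation component to any
    periodic image, always in [-box/2, box/2]."""
    return dx - box * round(dx / box)

box = 10.0
# Particles near opposite faces of the cell are actually close neighbors:
d = minimum_image(9.5 - 0.5, box)  # -1.0, not +9.0
```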

2.7.2 Force Calculation

We recall that at each integration step, forces are required to obtain acceler-
ations. Forces are calculated from the positions of all atoms, by considering
the atomic potential energy surface U (r) that depends on the positions of all
atoms. According to (2.9), the force vector is given by taking partial deriva-
tives of the potential energy surface with respect to the atomic coordinates of
the atom considered,
$$\mathbf{F}_i = -\frac{dU(\mathbf{r})}{d\mathbf{r}_i}. \qquad (2.54)$$

Fig. 2.27 Schematic of force calculation scheme in molecular dynamics, for a pair
potential. To obtain the force vector F one takes projections of the magnitude of
the force vector F into the three axial directions xi (this is done for all atoms in the
system)

For the special case where the interatomic forces are described by a poten-
tial φ that depends only on the distance between pairs of atoms, here denoted
by r, the contributions to the total force vector due this particular interaction
can be obtained by taking projections of the magnitude of the force vector F
into the three axial projections ri of the vector between the pair of particles.
The magnitude of the force vector due to interactions of pairs of atoms is then
given by
$$F = \frac{d\phi(r)}{dr}, \qquad (2.55)$$

where $F = |\mathbf{F}|$. The $i$th component of the force vector is then given by

$$F_i = -F\,\frac{r_i}{r}. \qquad (2.56)$$
Figure 2.27 depicts this approach schematically.
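A sketch of (2.55)–(2.56) in Python, using a hypothetical harmonic pair potential φ(r) = ½k(r − r0)² as the example interaction:

```python
def pair_force(pos_i, pos_j, dphi_dr):
    """Force vector on atom i due to atom j for a pair potential,
    per (2.55)-(2.56): project F = dphi/dr onto the axial directions."""
    r_vec = [a - b for a, b in zip(pos_i, pos_j)]
    r = sum(c * c for c in r_vec) ** 0.5
    f_mag = dphi_dr(r)                      # scalar F per (2.55)
    return [-f_mag * c / r for c in r_vec]  # components per (2.56)

# Hypothetical harmonic pair potential phi(r) = 0.5 k (r - r0)^2:
k, r0 = 2.0, 1.0
f = pair_force([2.0, 0.0, 0.0], [0.0, 0.0, 0.0], lambda r: k * (r - r0))
```

Here the bond is stretched beyond r0, so the computed force on atom i points back toward atom j, as expected for an attractive restoring interaction.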
In principle, all atoms in the system interact with all other atoms, which
requires a nested loop for calculation of the force vectors of all atoms. This
renders the total computational time requirement to solve the problem second
order with respect to the number of particles N ,
Fig. 2.28 Use of neighbor lists and bins to achieve linear scaling ∼N in molecular
dynamics. Panel (a): Example of how neighbor lists are used. The four neighbors
of the central atom (in the circle) are stored in a list so that force calculation can
be done directly based on this information. This changes the numerical problem to
a linear scaling effort. Panel (b): The computational domain is divided into bins
according to the physical position of atoms. Then, atomic interactions must only be
considered within the atom’s own bin and atoms in the neighboring bins

$$t_{\rm tot} \sim N^2. \qquad (2.57)$$

The first loop goes over all atoms, and the second loop goes through all
possible neighbors of each atom. The following pseudocode illustrates this
process:
for i in range(N):            # loop over all atoms i
    for j in range(N):        # second loop over all atoms neighboring atom i
        if j == i:
            continue          # skip self-interaction
        r = distance(i, j)    # calculate distance between atoms i and j
        F = f(r)              # calculate force depending on the distance

Such second-order scaling of the force calculation time is a computational
disaster and would prevent us from solving large systems. Several strategies
are commonly used to avoid this type of scaling, as discussed in the next few
sections.

2.7.3 Neighbor Lists and Bins

Avoiding the second-order scaling of the force calculation (∼N²) is important
for developing numerically feasible strategies. A bookkeeping scheme that is
often used in molecular dynamics simulation is a neighbor list that keeps track
of which particles are the nearest, second nearest, and so forth neighbors of
each particle.
This method is used to save time from checking every particle in the system
every time a force calculation is made. The list can be used for several time
steps before updating. Each update is expensive since it involves N × N oper-
ations for an N -particle system. In low-temperature solids where the particles
do not move very much, it is possible to perform an entire simulation without
or with only a few updates, whereas in simulations of liquids, updating every
5 or 10 steps is quite common. Figure 2.28a shows a schematic of how neighbor
lists are used. We note that neighbor lists can only be implemented if particles
interact only up to a certain cutoff radius; for very long range interactions,
the definition of neighbor lists may not be feasible.
An alternative to generation of neighbor lists is the decomposition of
the computational domain into bins. The size of the bin is chosen compa-
rable to the cutoff radius of the potential, so that atomic interactions must
only be considered within the atom’s own bin and the neighboring bins (see
Fig. 2.28b).
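A minimal binning sketch follows (positions, box size, and cutoff are hypothetical example values; a production code would follow the candidate search with an explicit distance check against the cutoff):

```python
from collections import defaultdict
from itertools import product

def build_bins(positions, box, r_cut):
    """Sort atoms into cubic bins with edge >= r_cut, so interaction
    partners can only sit in an atom's own bin or the 26 neighbors."""
    n_bins = max(1, int(box / r_cut))
    size = box / n_bins
    bins = defaultdict(list)
    for idx, (x, y, z) in enumerate(positions):
        key = (int(x / size) % n_bins, int(y / size) % n_bins,
               int(z / size) % n_bins)
        bins[key].append(idx)
    return bins, n_bins

def neighbor_candidates(cell, bins, n_bins):
    """All atoms in the given cell and its 26 (periodic) neighbor cells."""
    out = []
    for d in product((-1, 0, 1), repeat=3):
        key = tuple((c + dc) % n_bins for c, dc in zip(cell, d))
        out.extend(bins.get(key, []))
    return out

pos = [(0.5, 0.5, 0.5), (1.5, 0.5, 0.5), (8.0, 8.0, 8.0)]
bins, n = build_bins(pos, box=10.0, r_cut=1.0)
cands = neighbor_candidates((0, 0, 0), bins, n)  # atoms 0 and 1, not 2
```

Because the bin assignment is a single pass over all atoms, the cost of building and querying the bins grows linearly with N.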

2.8 Property Calculation


In this section, we summarize important methods to calculate system prop-
erties from molecular dynamics simulations. We introduce definitions and
numerical procedures for the following material properties:
• Temperature T (thermodynamic property)
• Pressure P (thermodynamic property)
• Radial distribution function g(r) (structural information)
• Mean square displacement function (measure for mobility of atoms, relates
to diffusivity)
• Velocity autocorrelation function (transport properties)
• Virial stress tensor (link to continuum Cauchy stress)

2.8.1 Temperature Calculation

Using the kinetic energy of the system K, the temperature is given by

$$T = \frac{2}{3}\,\frac{K}{N k_B}. \qquad (2.58)$$
Note that the numerical value for the Boltzmann constant is kB = 1.3806503 ×
10⁻²³ m² kg s⁻² K⁻¹, relating energy and temperature at the level of individual
atoms or particles (its units are energy per absolute temperature). Please see
also Table 4.7 for other units and their conversion to SI units.
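A direct transcription of (2.58), assuming SI units and velocities stored as 3-tuples (the argon-like mass in the example is illustrative):

```python
def temperature(velocities, mass, k_b=1.3806503e-23):
    """Instantaneous temperature per (2.58): T = 2K / (3 N k_B),
    with K the total kinetic energy (SI units assumed)."""
    n = len(velocities)
    kinetic = 0.5 * mass * sum(vx * vx + vy * vy + vz * vz
                               for vx, vy, vz in velocities)
    return 2.0 * kinetic / (3.0 * n * k_b)

# Illustrative: argon-like mass (~6.63e-26 kg), all atoms at 100 m/s:
t_example = temperature([(100.0, 0.0, 0.0)] * 10, mass=6.63e-26)  # ~16 K
```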

2.8.2 Pressure Calculation

The pressure P is given by


 
$$P = \frac{1}{V}\left[N k_B T - \frac{1}{3}\sum_{i=1}^{N}\sum_{\substack{j=1\\ j<i}}^{N} r_{ij}\,\frac{d\phi(r_{ij})}{dr_{ij}}\right]. \qquad (2.59)$$

The first term stems from the kinetic contributions of particles hitting the
wall of the container, and the second term stems from the interatomic forces,
expressed as the force vector contribution multiplied by the distance vector
component between particles i and j.
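A sketch of (2.59); the pair list and the dφ/dr callback are placeholders for whatever potential is in use, and the numerical values are hypothetical:

```python
def pressure(n, kT, volume, pair_distances, dphi_dr):
    """Virial pressure per (2.59): ideal-gas part N k_B T / V plus the
    pair-virial part; pair_distances holds r_ij for all pairs j < i."""
    virial = sum(r * dphi_dr(r) for r in pair_distances)
    return (n * kT - virial / 3.0) / volume

p_ideal = pressure(100, 1.0, 50.0, [], lambda r: 0.0)    # ideal-gas limit
p_rep = pressure(100, 1.0, 50.0, [1.0], lambda r: -3.0)  # repulsion raises P
```

With no pair interactions the expression reduces to the ideal-gas law P = N kB T / V, and a repulsive pair (dφ/dr < 0) adds a positive virial contribution.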

Fig. 2.29 Method to calculate the radial distribution function g(r), counting
particles in a shell of volume Ω(r ± ∆r/2)

2.8.3 Radial Distribution Function

The radial distribution function (RDF) g(r) is defined as


$$g(r) = \frac{\rho(r)}{\rho_0}, \qquad (2.60)$$
where ρ(r) is the local density at r, and ρ0 = N/V is the averaged atomic
volume density. A possible numerical strategy to calculate g(r) is depicted in
Fig. 2.29. For such a discrete estimate,

$$g(r) = \frac{N(r \pm \Delta r/2)}{\Omega(r \pm \Delta r/2)\,\rho_0}, \qquad (2.61)$$

where N(r ± ∆r/2) is the number of particles in the volume shell with volume
Ω(r ± ∆r/2).

Fig. 2.30 Radial distribution function g(r) for various atomistic configurations,
including a solid (crystal), a liquid and a gas

An example is shown in Fig. 2.30. The RDF enables quantitative analysis
of atomic structure and can elucidate whether a system is in the solid, liquid,
or gas state. Moreover, the RDF provides detailed information about the
type of crystal structure or includes precise signatures of the specific atomic
arrangement of a liquid. It also provides a means to compare quantitatively
with experimental results (e.g., obtained using neutron-scattering techniques)
and can thereby assist in developing new potential formulations.
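A histogram-based estimate of g(r) per (2.61) for a cubic periodic box can be sketched as follows (an O(N²) loop, adequate only for small systems; positions and bin widths in the example are arbitrary):

```python
import math

def radial_distribution(positions, box, dr, r_max):
    """Histogram estimate of g(r) per (2.61) in a cubic periodic box:
    count pairs per spherical shell, then normalize by shell volume
    and the average density rho_0 = N / V."""
    n = len(positions)
    rho0 = n / box**3
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a, b in zip(positions[i], positions[j]):
                dx = a - b
                dx -= box * round(dx / box)  # minimum-image convention
                d2 += dx * dx
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2       # each pair counts for both atoms
    g = []
    for k in range(nbins):
        shell = 4.0 / 3.0 * math.pi * (((k + 1) * dr) ** 3 - (k * dr) ** 3)
        g.append(hist[k] / (n * shell * rho0))
    return g
```

In practice the histogram would also be averaged over many snapshots to reduce statistical noise.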

2.8.4 Mean Square Displacement Function

The mean square displacement function is defined as

$$\left\langle \Delta r^2 \right\rangle = \frac{1}{N}\sum_{i=1}^{N}\left(\mathbf{r}_i(t) - \mathbf{r}_i(t=0)\right)^2. \qquad (2.62)$$

If averaged over all particles, the mean square displacement function pro-
vides the square distance that particles have moved during time t. The mean
square displacement function is zero at t = 0, and it grows with increasing
time. In a solid, it is expected that the mean square displacement function
grows to a characteristic value and then saturates at a constant value. In
a liquid, all atoms move continuously through the material, as in Brownian
motion. Diffusivity in liquids is related to the linear variation of the mean
square displacement function with time t. The diffusivity D can be calculated
as
$$D = \frac{1}{2d}\,\lim_{t\to\infty}\frac{d}{dt}\left\langle \Delta r^2 \right\rangle, \qquad (2.63)$$
where d = 2 in 2D and d = 3 in 3D (the parameter d describes the
dimensionality).
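Both (2.62) and (2.63) can be sketched directly; the finite-difference slope estimate below is a crude stand-in for the long-time limit, and the trajectory layout is an assumption of this sketch:

```python
def mean_square_displacement(traj):
    """MSD per (2.62): traj[t][i] is the unwrapped position tuple of
    particle i at frame t; returns one MSD value per frame."""
    ref = traj[0]
    msd = []
    for frame in traj:
        total = sum(sum((a - b) ** 2 for a, b in zip(p, p0))
                    for p, p0 in zip(frame, ref))
        msd.append(total / len(frame))
    return msd

def diffusivity(msd, dt=1.0, d=3):
    """Crude estimate of D per (2.63): slope of the MSD between the
    last two frames, divided by 2d (d = dimensionality)."""
    return (msd[-1] - msd[-2]) / dt / (2 * d)
```

Note that unwrapped coordinates must be used: positions folded back into the periodic box by PBCs would artificially cap the MSD.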

2.8.5 Velocity Autocorrelation Function

The velocity autocorrelation function (VAF) is defined as

$$\langle v(0)\,v(t)\rangle = \frac{1}{N}\sum_{i=1}^{N} v_i(t_0)\,v_i(t_0 + t), \qquad (2.64)$$

where vi refers to the magnitude of the velocity vector of particle i.


The velocity autocorrelation function gives information about the atomic
motions of particles in the system. Since it is a correlation of the particle
velocity at one time with the velocity of the same particle at another time,
the information refers to how a particle moves in the system, such as during
diffusion. The diffusion coefficient can be calculated as
 
$$D = \frac{1}{3}\int_{t=0}^{t=\infty} \langle v(0)\,v(t)\rangle\, dt. \qquad (2.65)$$

Autocorrelation functions can be used to calculate other transport coeffi-
cients, such as the thermal conductivity or the shear viscosity. Such expres-
sions, derived from the Green–Kubo relations, provide links between correla-
tion functions and material transport coefficients.
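A sketch of (2.64), here additionally averaged over time origins t0 (a common practice for improving statistics; the trajectory layout is an assumption of this sketch):

```python
def velocity_autocorrelation(v_traj, max_lag):
    """VAF per (2.64), averaged over particles and over time origins
    t0; v_traj[t][i] is the velocity tuple of particle i at frame t."""
    n_frames = len(v_traj)
    vaf = []
    for lag in range(max_lag):
        acc, count = 0.0, 0
        for t0 in range(n_frames - lag):
            for i in range(len(v_traj[t0])):
                acc += sum(a * b for a, b in
                           zip(v_traj[t0][i], v_traj[t0 + lag][i]))
                count += 1
        vaf.append(acc / count)
    return vaf

# For constant velocities the correlation never decays:
vaf_const = velocity_autocorrelation([[(1.0, 0.0, 0.0)]] * 4, 3)
```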
In addition to the diffusivity D, the VAF also provides information about
structural properties of an atomic systems. For liquids or gases, under rel-
atively weak molecular interactions, the magnitude of the VAF reduces
gradually under the influence of weak forces, because the velocity decorre-
lates with time. This is the same as stating that the atom loses its memory
of its initial velocity. For example, in the case
of a gas, the VAF plot is a simple exponential decay, revealing the presence
of weak forces slowly destroying the velocity correlation.
In a solid, under relatively strong interactions, the atoms oscillate, vibrat-
ing backwards and forwards and reversing their velocity at the end of each
oscillation. Therefore, the VAF corresponds to a function that
oscillates strongly from positive to negative values and back again, while the
magnitude of the oscillations decay in time. This leads to a function resembling
a damped harmonic motion.
Figure 2.31 plots the VAF for different cases.

2.8.6 Virial Stress Tensor

The challenge of defining an atomistic-based stress measure is how to relate
the continuum stress with the atomistic stress.
While continuum properties are valid at any material point, this
is not true for atomistic quantities due to the discrete nature of atomic
microstructure. Figure 2.32 depicts a schematic.

Fig. 2.31 Velocity autocorrelation function (VAF) for an ideal gas, dense gas,
liquid, and solid

Fig. 2.32 Relating the continuum stress with the atomistic stress. The left shows
a continuum system in which σij (r) is defined at any point r. In contrast, in the
atomistic system the stress tensor is only defined at discrete points where atoms are
located

For a pair potential, the virial stress is defined as


$$\sigma_{ij} = \frac{1}{V}\left(-\sum_{\alpha} m_{\alpha} v_{\alpha,i} v_{\alpha,j} + \frac{1}{2}\sum_{\alpha,\beta,\,\alpha\neq\beta} \left.\frac{\partial \phi(r)}{\partial r}\,\frac{r_i}{r}\, r_j \right|_{r=r_{\alpha\beta}} \right), \qquad (2.66)$$

where ri is the projection of the interatomic distance vector r along coordi-
nate i, and the indices α and β refer to the atom numbers considered in the
calculation. The first term mα vα,i vα,j is the kinetic contribution, and the sec-
ond term is the force contribution. We only consider the force part, excluding
the part containing the effect of the velocity of atoms (the kinetic part), a
measure referred to as Zhou's virial stress. It was recently shown that the
stress including the kinetic contribution is not equivalent to the mechanical
Cauchy stress. Then the stress tensor is defined as
$$\sigma_{ij} = \frac{1}{V}\,\frac{1}{2}\sum_{\alpha,\beta,\,\alpha\neq\beta} \left.\frac{\partial \phi(r)}{\partial r}\,\frac{r_i}{r}\, r_j \right|_{r=r_{\alpha\beta}}. \qquad (2.67)$$

Note that the pressure P as discussed in Sect. 2.8.2 is a special case of the
stress tensor definition. The virial stress needs to be averaged over space and
time to converge to the Cauchy stress tensor. For further discussion on the
virial stress and other definitions of the Cauchy stress tensor (e.g., the Hardy
stress) we refer the reader to a review article [136].

Fig. 2.33 Example of how to calculate the stress tensor in a 1D system (atoms
1, 2, 3 with forces F1, F2 and distances r21, r23)

Figure 2.33 depicts a schematic that shows how the stress tensor can be
calculated in a 1D system, for a nearest neighbor pair potential. Neglecting
the kinetic contribution (v ≈ 0) and with

$$F = -\frac{d\phi(r)}{dr}\,\frac{r_i}{r} \qquad (2.68)$$
as the force between two particles, the stress tensor coefficient σ11 is
$$\sigma_{11} = \frac{1}{V}\,\frac{1}{2}\left(F_{21}\, r_{21} + F_{23}\, r_{23}\right). \qquad (2.69)$$
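A sketch of (2.69) for the configuration of Fig. 2.33 (all force and distance values below are hypothetical):

```python
def virial_stress_1d(bonds, volume):
    """Force part of the virial stress per (2.69), kinetic part
    neglected (v ~ 0); bonds is a list of (F, r) per bond of atom 2."""
    return sum(f * r for f, r in bonds) / (2.0 * volume)

# Hypothetical symmetric tension on atom 2 of Fig. 2.33: the force from
# atom 1 pulls left (negative) along r_21, which also points left; the
# force from atom 3 pulls right along r_23, which points right.
s11 = virial_stress_1d([(-1.5, -1.0), (1.5, 1.0)], volume=2.0)  # +0.75
```

The positive sign of the result correctly indicates a tensile stress state.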

2.9 Large-Scale Computing


Molecular dynamics of mechanics applications can be computationally chal-
lenging, due to
• Complexities of force field expressions (calculation of atomic forces)
• Large number of atoms, leading to huge computational requirements
• Limitations in accessible time scales due to intrinsic limitations of molec-
ular dynamics, which lead to the requirement to simulate a large number
of molecular dynamics steps
To model macroscopic dimensions of materials with microstructural features,
one needs to consider system sizes of approximately 10²³ atoms (corre-
sponding to 1 mole). This computational burden cannot be handled by any
computational equipment as of today. In addition to the sheer computational
task, if one were able to carry out such simulations it would also result in
huge challenges for data analysis and visualization, or just for data handling
and storage. However, for many properties of materials it is not necessary to
consider 10²³ atoms. Many thermodynamical properties of solids or liquids
can be accurately described in systems of thousands of atoms or less, in some
instances. Many mechanical properties require larger system sizes, containing
at least tens of thousands to billions of atoms.

2.9.1 Historical Development of Computing Power

The increase in computational power has largely contributed to the success of


molecular dynamics as a widely used tool in the analysis of small structures
or materials in extreme conditions. Further increase in computational power
will without doubt increase the opportunities of using molecular simulation
for many new applications.
Computational power has increased significantly over the past decades,
now reaching peak performances of several PFLOP. Table 2.3 summarizes the
various performance measures used to characterize computational power, also
indicating when the particular performance measure was reached.

Performance  Floating point operations per second (FLOPS)
MFLOPS       10⁶, Million
GFLOPS       10⁹, Billion (reached in the 1990s)
TFLOPS       10¹², Trillion (reached around 2000)
PFLOPS       10¹⁵, Quadrillion (estimated to be reached in 2010)
Table 2.3 Explanation of the different measures of computing power, as well as
a description when it became available. In comparison a state-of-the art personal
computer (PC, laptop) provides a performance of approximately 30 GFLOPS

The need for military applications has strongly driven the development of
supercomputers. (This includes computers operated by the U.S. Department
of Energy, DoE, or the U.S. National Science Foundation, which are among the
most powerful supercomputers of the world. DoE computers are for instance
being used to maintain the U.S. stockpile of nuclear weapons.) Many other
large computational centers have been established in recent years in Europe
and Asia, including Japan’s Earth Simulator (which, for some time, was the
most powerful computer on Earth).
Early supercomputers, such as the Almaden Spark used in the 1960s, were
capable of simulating hundreds of atoms. GFLOPS computing enabled
the simulation of millions of particles in the mid 1990s. TFLOPS systems led
to the first simulations of systems that exceed one billion atoms. The current state-of-the
art in atomistic simulation reaches system sizes of tens to hundreds of bil-
lions of atoms, corresponding to several micrometer-sized physical dimensions

[133, 137–141] (see Fig. 2.34). However, only a few computational groups or
scientists routinely carry out such large simulations.

Fig. 2.34 Increase in computer power over the last decades and possible system sizes
for classical molecular dynamics modeling. The availability of PFLOPS computers
is expected by the end of the current decade, which should enable simulations with
hundreds of billions of atoms

The organization top500.org publishes lists of the fastest supercomput-
ers in the world twice a year, called "TOP500." It can be accessed at
http://www.top500.org. Figure 2.35 displays the top 10 of the TOP500 list
as of Spring 2008.

2.9.2 Parallel Computing

On the basis of the concept of concurrent computing, modern parallel com-
puters are made out of hundreds or thousands of small computers (for
example, personal computers) working simultaneously on different parts of
the same problem. Information between these small computers is shared via
communication, which is achieved by message-passing procedures such as the
Message Passing Interface (MPI) [142].
Parallel molecular dynamics is relatively straightforward to implement
with great efficiency in a message-passing environment. It is important to have
an effective algorithm for handling the summations of N interacting particles.
If summations had to be carried out for each particle over all particles, the
problem would scale with N 2 . This is a computational catastrophe for large
systems! However, if the interactions between particles are short ranged, the
problem can be reduced so that the execution time scales linearly with the
number of particles (that is, execution time scales with N ). The computational
space is divided up into cells such that in searching for neighbors interacting

Fig. 2.35 Summary of top 10 of the TOP500 supercomputer list, as of Spring 2008

with a given particle, only the cell in which it is located and its neighboring
cells have to be considered. Since placing the particles in the correct cells
scales linearly with N , the problem originally scaling with N 2 can therefore be
reduced to N . With a parallel computer whose number of processors increases
with the number of cells (the number of particles per cell does not change),
the computational burden remains constant.
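The cell-division idea can be sketched in a few lines of Python. This is an illustrative toy, not code from the cited references: building the cell list is O(N), and a neighbor query for one particle scans only its own cell and the 26 surrounding cells. It assumes a periodic box that is at least three cells wide in each dimension.

```python
def build_cell_list(positions, box, rc):
    """Assign each particle to a cubic cell of side >= rc.

    Returns the grid dimensions and a dict mapping cell-index tuples
    to lists of particle indices. Building the list is O(N).
    """
    n = [max(1, int(box[d] // rc)) for d in range(3)]
    cells = {}
    for i, r in enumerate(positions):
        idx = tuple(min(int(r[d] / box[d] * n[d]), n[d] - 1) for d in range(3))
        cells.setdefault(idx, []).append(i)
    return n, cells

def neighbors_within(positions, box, rc, i, n, cells):
    """Particles within rc of particle i, found by scanning only the
    27 cells around i's cell (periodic boundaries via modulo)."""
    ci = tuple(min(int(positions[i][d] / box[d] * n[d]), n[d] - 1)
               for d in range(3))
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                cell = ((ci[0] + dx) % n[0], (ci[1] + dy) % n[1],
                        (ci[2] + dz) % n[2])
                for j in cells.get(cell, []):
                    if j == i:
                        continue
                    # minimum-image distance
                    d2 = 0.0
                    for d in range(3):
                        dr = positions[i][d] - positions[j][d]
                        dr -= box[d] * round(dr / box[d])
                        d2 += dr * dr
                    if d2 < rc * rc:
                        result.append(j)
    return result
```

A production molecular dynamics code would combine this with linked lists and distribute the cells over processors via message passing.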
The speedup factor S is defined as the ratio of execution time on one
processor (Ts ) over the execution time on p processors (Tp ):

S = Ts / Tp .    (2.70)

The perfectly efficient parallel computer would exhibit linear speedup. This
would mean that the computation time for p processors is 1/p times the
execution time on one processor. However, the speedup depends strongly on
the fraction of the work done in parallel. We refer the reader to Plimpton’s
algorithms for molecular dynamics with short-range forces [143].
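How strongly the serial fraction limits the speedup can be quantified with Amdahl's law, a standard result consistent with Eq. (2.70) but not spelled out in the text above; a one-line sketch:

```python
def amdahl_speedup(p, serial_fraction):
    """Amdahl's law: upper bound on the speedup S = Ts/Tp (Eq. 2.70)
    for p processors when a fraction f of the work is inherently
    serial. As p grows, S saturates at 1/f."""
    f = serial_fraction
    return 1.0 / (f + (1.0 - f) / p)
```

For example, with a serial fraction of 5%, the speedup saturates below 20 no matter how many processors are used.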
Figure 2.36 depicts a schematic and scaling results of a modern paralleliza-
tion scheme, referred to as the tunable hierarchical cellular decomposition
scheme (THCD) (see Fig. 2.36a) [133]. The THCD involves a more com-
plex domain decomposition that includes a hierarchy of parameterized cell
data/computation structures, and adaptive load balancing through wavelet-
based computational-space decomposition. The THCD was applied to paral-
lelize the ReaxFF reactive force field approach (referred to as F-ReaxFF). This
parallelization scheme enabled the simulation of 0.56 billion-atom F-ReaxFF
systems. Figure 2.36b depicts the total execution (circles) and communica-
tion (squares) times per molecular dynamics time step as a function of the
number of processors, illustrating that the parallel speedup is almost perfect

Fig. 2.36 Modern parallelization scheme. Subplot (a) depicts the schematic of the
tunable hierarchical cellular decomposition scheme (THCD). The physical volume
is subdivided into process groups, PGγ , each of which is spatially decomposed into
processes, Pγπ . Each process consists of a number of computational cells (e.g., linked-
list cells in molecular dynamics). Subplot (b) shows the total execution (circles) and
communication (squares) times per molecular dynamics time step as a function of the
number of processors for the F-ReaxFF molecular dynamics algorithm with scaled
workloads (36,288P-atom RDX systems on P processors, P = 1, . . . , 1920, of
Columbia, a supercomputer at NASA). Reprinted from Computational Materials
Science, Vol. 38(4), A. Nakano, R. Kalia, K. Nomura, A. Sharma, P. Vashishta,
F. Shimojo, A. van Duin, W.A. Goddard III, R. Biswas and D. Srivastava,
A divide-and-conquer/cellular-decomposition framework for million-to-billion atom
simulations of chemical reactions, pp. 642–652, copyright © 2007, with permission
from Elsevier

(perfect parallel speedup would correspond to a constant execution time for
increasing number of processors).

2.9.3 Discussion

We emphasize, however, that the “size” of the simulations does not determine
how “useful” a simulation is by itself. Instead, the most important issue and
measure for a successful simulation is always the physics that can be extracted
from the simulation. This objective should dictate the system size. In many
cases, such as for dislocation–dislocation interaction, system sizes on the order
of micrometers are needed (dislocation interaction is associated with a char-
acteristic length scale of micrometers). This example illustrates that there is
still a need for the development of simulation techniques and more computer
power.
Future development using cheap off-the-shelf technology based on Linux
clusters to build supercomputers (instead of using very expensive UNIX-based
supercomputers) is promising, as indicated by recent publications [68, 69] and
the analysis of data in the TOP500 list.

2.10 Visualization and Analysis Methods


Large-scale atomistic computer simulations can produce terabytes of data,
since the location, velocity, energy, and stress tensor of each atom need
to be stored (a molecular dynamics data set for a billion atoms occupies
around 100 GB of disk space). To interpret and understand the simulation
results, it is essential to have analysis tools available that are capable of
filtering out the useful information. Data processing and visualization are
important steps in the analysis to extract useful information from the simula-
tion. For example, the collective motion and interaction of defects determine
macroscopic properties such as the toughness of a material. Techniques to
extract such information from the positions of atoms are critical, yet in most
cases they remain to be developed. Some of these postprocessing and
visualization techniques are discussed in the following paragraphs.
A long-standing dream of computer simulation scientists is three-dimen-
sional virtual reality to analyze the results. Imagine walking through the data,
as a viewer of atomic-scale size [139,144]! Scientists would then be able to iden-
tify interesting points and study these closer as the simulation proceeds. At
the “Collaboratory for Advanced Computing and Simulations (CACS)” at the
University of Southern California, an “Atomsviewer System” has been devel-
oped which visualizes large-scale data sets from huge computer simulations
[139, 144] (see also http://cacs.usc.edu/). The main feature of this system is
the ability to view materials processes simultaneously from different perspec-
tives. Figure 2.37 shows an example view of the tiled screen view of this system.
To visualize this huge amount of data, new techniques are being developed.
One approach is to process only the data that the viewer will actually see
from the current perspective [139, 144]. This is achieved using the octree data
structure.
The main idea is that although the data set may be very large, the viewer only
sees a very small portion of the data at any instant in time. The octree data
structure is a data management method to extract the data in the viewer’s
field of view [144]. This method is relatively coarse, and it has been shown
that 60% of all atoms are still invisible since they are hidden behind other
particles. Additional techniques such as a probabilistic approach referred to as
occlusion culling remove hidden atoms and thus further reduce the workload
for visualization. The whole visualization process is set up on a PC cluster
through parallel and distributed computing.
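The octree idea can be illustrated with a minimal Python sketch (a toy, not the Atomsviewer implementation): the cube is split recursively into eight octants, and a range query for the viewer's field of view descends only into octants that overlap the query box, so most of the data is never touched.

```python
class Octree:
    """Minimal octree over points in a box [lo, hi)^3.

    Leaves hold at most `cap` points; overfull leaves split into eight
    octants. A range query visits only nodes overlapping the query box.
    """
    def __init__(self, lo, hi, cap=4):
        self.lo, self.hi, self.cap = lo, hi, cap
        self.points = []
        self.children = None

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.cap:
                self._split()
            return
        self._child(p).insert(p)

    def _split(self):
        mid = tuple((self.lo[d] + self.hi[d]) / 2 for d in range(3))
        self.children = []
        for k in range(8):  # bit d of k selects upper half in dim d
            lo = tuple(mid[d] if (k >> d) & 1 else self.lo[d] for d in range(3))
            hi = tuple(self.hi[d] if (k >> d) & 1 else mid[d] for d in range(3))
            self.children.append(Octree(lo, hi, self.cap))
        pts, self.points = self.points, []
        for p in pts:
            self._child(p).insert(p)

    def _child(self, p):
        mid = tuple((self.lo[d] + self.hi[d]) / 2 for d in range(3))
        k = sum((p[d] >= mid[d]) << d for d in range(3))
        return self.children[k]

    def query(self, qlo, qhi):
        # skip subtrees that do not overlap the query box
        if any(self.hi[d] <= qlo[d] or self.lo[d] >= qhi[d] for d in range(3)):
            return []
        if self.children is None:
            return [p for p in self.points
                    if all(qlo[d] <= p[d] < qhi[d] for d in range(3))]
        out = []
        for c in self.children:
            out.extend(c.query(qlo, qhi))
        return out
```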
Another important aspect of scientific visualization is the generation of
movies. In recent years, it has been increasingly appreciated that a pic-
ture is worth a thousand words, and a movie is worth a thousand pictures.
Animations of simulations help to guide the eye to discover new scientific
phenomena. Historically, this has been particularly taken advantage of in
the biophysics community, where visualization of complex biostructures (e.g.,
proteins) is key in the understanding and interpreting simulation results. Inter-
estingly, some researchers have started to implement techniques which allow

Fig. 2.37 Rendering of a large molecular dynamics simulation on a tiled display at
USC, showing hypervelocity impact damage of a ceramic plate with impact velocity
15 km s−1 , where one quarter of the system is cut to show the internal pressure
distribution (the projectile is shown in white). This figure illustrates how novel
visualization schemes provide analysis methods for ultra large-scale simulations.
Reprinted from Computational Materials Science, Vol. 38(4), A. Nakano, R. Kalia, K.
Nomura, A. Sharma, P. Vashishta, F. Shimojo, A. van Duin, W.A. Goddard III, R.
Biswas and D. Srivastava, A divide-and-conquer/cellular-decomposition framework
for million-to-billion atom simulations of chemical reactions, pp. 642–652, copyright
© 2007, with permission from Elsevier

real-time interaction of users with particles in the simulation. For example, the
biophysics group around Klaus Schulten has set up a system where scientists
can interact with the simulation by using a tool to manipulate molecules [145].
The researchers implemented a system called Interactive Molecular Dynamics
(IMD). This system allows manipulation of molecules in molecular dynamics
simulations with real-time force feedback, as well as graphical display. Com-
munication is achieved through an efficient socket connection between the
visualization program (VMD) and a molecular dynamics program (NAMD)
running on single or multiple machines. In this method, a natural force
feedback interface for molecular steering is provided by a haptic device [145].
For the analysis of atomistic simulations focused on mechanical properties,
measures like strain, stress, or potential energy of atoms are important quan-
tities that can be used to analyze atomistic data, in particular with respect
to continuum mechanics theories. However, it is often advantageous to post-
process the data and derive new quantities that provide more information about
the defect structure. Here we discuss a few examples for the analysis of crystal
defects in metals that will become particularly important in the third part of
this book.
Richard Hamming's statement that "the purpose of computing is insight,
not numbers" underlines the importance of processing the raw simulation
data, which appear as "useless" numbers, to gain understanding.

Fig. 2.38 Analysis of a dislocation network using the energy filtering method in
nickel with 150,000,000 atoms [13, 14]. Subplot (a) shows the whole simulation cell
with two cracks at the surfaces serving as sources for dislocations, and subplot (b)
shows a zoom into a small subvolume. Partial dislocations appear as wiggly lines,
and sessile defects appear as straight lines with slightly higher potential energy

Visualization, defined as a method of computing that transforms the symbolic
into the geometric, enables one to observe simulations and computations.
Visualization is also considered an approach for seeing the unseen, which
enriches the process of scientific discovery and fosters profound and unexpected
insights. Visualization of computational results is undoubtedly becoming an
increasingly important and challenging task in supercomputing.

2.10.1 Energy Method

To visualize crystal defects, the easiest approach is to use the energy method.
This method has frequently been used to “see” into the interior of the solid
(e.g., [138, 146]). In this method, only those atoms with potential energy
greater than or equal to a critical energy φcr above the bulk energy φb are
shown. The energy method has been very effective for displaying disloca-
tions, microcracks, and other imperfections in crystal packing. This method
reduces the number of atoms being displayed by approximately two orders of
magnitude in three dimensions [138].
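The filtering criterion itself is a one-liner; the sketch below is illustrative Python with a hypothetical data layout of (position, potential energy) pairs, not code from the cited references.

```python
def energy_filter(atoms, phi_bulk, phi_cr):
    """Energy method: keep only atoms whose potential energy lies at
    least phi_cr above the bulk value phi_bulk, so that defects such
    as dislocations and crack surfaces become visible.

    `atoms` is a list of (position, potential_energy) pairs.
    """
    return [(r, phi) for (r, phi) in atoms if phi >= phi_bulk + phi_cr]
```

For a well-chosen threshold, the perfect-crystal bulk drops out and only the defect atoms remain to be rendered.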
An example of an analysis of a dislocation network using the energy
method is shown in Fig. 2.38. Figure 2.38a shows the whole simulation cell
with two cracks at the surfaces serving as sources for dislocations, and
Figure 2.38b shows a zoom into a small subvolume of the material revealing
a complex dislocation microstructure. The data are taken from a simula-
tion of work-hardening in nickel that comprises approximately 150,000,000
atoms [13].
Assuming a crystal defect is identified as a dislocation, it can be studied
in more detail based on a geometric analysis of the lattice close to the dislo-
cation core, which allows one to determine the Burgers vector and the slip
plane. For that purpose, one can rotate the atomic lattice such that one is
looking onto a {111} plane, with the horizontal (x) axis oriented in a ⟨110⟩
direction and the vertical (y) axis aligned with a ⟨111⟩ direction. To help
visualize dislocations, it is helpful to stretch the atomic lattice by a factor of
5 to 10 in the ⟨110⟩ direction. A systematic rotation of the atomic lattice to
investigate all possible Burgers vectors is then necessary. Instead of analyzing
a part of the atomic lattice containing many dislocations, one can choose a
domain of the atomic lattice which contains only one dislocation. This
approach requires a very detailed understanding of the lattice and
dislocations [38, 60]. This method of analysis is similar to the analysis of
TEM images from "real" laboratory experiments.

Fig. 2.39 Application of the energy method to visualize fracture surfaces in a
computational fracture experiment. Only high-energy atoms are shown by filtering
them according to their potential energy. This enables an accurate determination
of the geometry of cracks, in particular of the crack tip. Typically, the analysis is
confined to a search region (shown as a dashed line) to avoid inclusion of effects of
free surfaces

Figure 2.39 depicts an application of the energy method to visualize frac-
ture surfaces in a computational fracture experiment. Only high-energy atoms
are shown by filtering them according to their potential energy. Similar to what
is shown in Fig. 2.38 for ductile materials, this method enables one to carry
out an accurate determination of the geometry of defects such as cracks, in
particular of the position of the crack tip. Typically, the analysis is confined
to a search region (shown by the dashed line) to avoid inclusion of effects of
free surfaces.

2.10.2 Centrosymmetry Parameter


A more advanced analysis can be performed using the centrosymmetry tech-
nique proposed by Kelchner and coworkers [36]. This method makes use of
the fact that centrosymmetric crystals remain centrosymmetric after homo-
geneous deformation. Each atom has pairs of equal and opposite bonds with

Fig. 2.40 Close view of the defect structure in a simulation of work-hardening in
nickel, analyzed using the centrosymmetry technique [13, 14]. The plot shows the
same subvolume as in Fig. 2.38b

its nearest neighbors. During deformation, bonds will change direction and/or
length, but they remain equal and opposite within the same pair. This rule
breaks down when a defect is close to an atom under consideration. The
centrosymmetry method is particularly helpful to separate different types of
defects from one another, and to display stacking faults (in contrast, using the
energy method it is difficult to observe stacking faults). The centrosymmetry
parameter for an atom is defined as [36]
ci = Σ_{j=1}^{6} Σ_{k=1}^{3} | r_{k,j} + r_{k,j+6} |² ,    (2.71)

where rk,j is the kth component of the bond vector (here, k = 1, 2, 3 corre-
sponding to the directions x, y, and z) of atom i with its neighbor atom j,
and r_{k,j+6} is the same quantity with respect to the opposite neighbor in an
FCC crystal. We summarize the interpretation of ci in Table 2.4 (assuming
that the nearest-neighbor distance does not change near a defect). For the
analysis, it is reasonable to display ranges of these parameters. The method
can also be applied at elevated temperature, which is not possible using the
energy method due to the thermal fluctuation of atoms.
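Eq. (2.71) is straightforward to evaluate once the 12 nearest-neighbor bond vectors of an atom are known. The Python sketch below is illustrative: it pairs "opposite" neighbors greedily by the smallest |r_j + r_j'|², which is a common simplification rather than the exact pairing procedure of [36], and it assumes exactly 12 bond vectors are supplied.

```python
def centrosymmetry(bonds):
    """Centrosymmetry parameter ci (Eq. 2.71) for one atom, given its
    12 nearest-neighbor bond vectors in an FCC lattice. Opposite
    neighbors are paired greedily by the smallest |r_j + r_j'|^2;
    each pair contributes the squared norm of its vector sum."""
    def pair_term(a, b):
        s = [a[k] + b[k] for k in range(3)]
        return s[0] ** 2 + s[1] ** 2 + s[2] ** 2

    remaining = list(range(len(bonds)))
    c = 0.0
    while remaining:
        j = remaining.pop(0)
        # "opposite" neighbor: the partner minimizing |r_j + r_j'|^2
        jp = min(remaining, key=lambda m: pair_term(bonds[j], bonds[m]))
        remaining.remove(jp)
        c += pair_term(bonds[j], bonds[jp])
    return c
```

In a perfect FCC shell every bond has an exact negative partner, so ci vanishes; any distortion of the neighbor shell gives ci > 0.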

Table 2.4 Centrosymmetry parameter ci for various types of defects, normalized by
the square of the lattice constant a0². In the visualization scheme, we choose intervals
of ci to separate different defects from each other

Defect                ci /a0²    Range ∆ci /a0²
Perfect lattice       0.00       ci < 0.01
Partial dislocation   0.1423     0.01 ≤ ci < 0.2
Stacking fault        0.4966     0.2 ≤ ci < 1
Surface atom          1.6881     ci > 1

Fig. 2.41 Analysis of a dislocation using the slip vector approach. From the result of
the numerical analysis, direct information about the Burgers vector can be obtained.
The slip vector s is drawn at each atom as a small arrow. The Burgers vector b is
drawn at the dislocation (its actual length is exaggerated to make it better visible).
The dislocation line is approximated by discrete, straight dislocation segments. A
line element between “a” and “b” is considered

An example using this centrosymmetry technique is shown in Fig. 2.40.
This plot shows the same section as in Fig. 2.38b. Unlike in the analysis
with the energy method, stacking fault regions can be visualized with the
centrosymmetry technique.

2.10.3 Slip Vector

Although the centrosymmetry technique can distinguish well between dif-
ferent defects, it does not provide information about the Burgers vector of
dislocations. The slip vector approach was first introduced by Zimmerman
and coworkers in an application of molecular dynamics studies of nanoinden-
tation [147]. This parameter also contains information about the slip plane
and Burgers vector. The slip vector of an atom α is defined as
s_i^α = −(1/n_s) Σ_{β≠α}^{n_α} ( x_i^{αβ} − X_i^{αβ} ) ,    (2.72)

where n_s is the number of slipped atoms, x_i^{αβ} is the vector difference of
atoms α and β in the current configuration, and X_i^{αβ} is the vector differ-
ence of atoms α and β in the reference configuration at zero stress and no
mechanical deformation. The slip vector approach can be used for any mate-
rial microstructure, unlike the centrosymmetry parameter, which can only be
used for centrosymmetric microstructures.
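Eq. (2.72) translates almost directly into code. The following illustrative Python sketch decides which neighbors have "slipped" via a simple displacement threshold; the threshold value and data layout are hypothetical choices, not part of the original formulation.

```python
def slip_vector(x_alpha, X_alpha, x_neigh, X_neigh, tol=1e-6):
    """Slip vector of atom alpha (Eq. 2.72): minus the average change
    of the bond vectors x^{alpha beta} relative to the reference
    bond vectors X^{alpha beta}, averaged over slipped neighbors."""
    diffs = []
    for xb, Xb in zip(x_neigh, X_neigh):
        # change of the bond vector relative to the reference state
        d = [(x_alpha[k] - xb[k]) - (X_alpha[k] - Xb[k]) for k in range(3)]
        if sum(c * c for c in d) > tol * tol:
            diffs.append(d)  # neighbor counted as "slipped"
    ns = len(diffs)
    if ns == 0:
        return (0.0, 0.0, 0.0)
    return tuple(-sum(d[k] for d in diffs) / ns for k in range(3))
```

For a neighbor displaced by a Burgers vector b while the atom itself stays put, the slip vector recovers b.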
Figure 2.41 shows the result of a slip vector analysis of a single dislocation
in copper [39]. The slip vector s is drawn at each atom as a small arrow.
The Burgers vector b is drawn at the dislocation, where its actual length is
exaggerated to make it better visible. The dislocation line can be determined

from an energy analysis, and the line direction of a segment between point
“a” and “b” of the dislocation line is indicated by the vector l. The Burgers
vector b is given directly by the slip vector s. The analysis reveals that the
dislocation has Burgers vector b = 1/6 [112]. The unit vector of the line direction
of the segment is l ≈ [−0.3618 0.8148 − 0.5530]. The length of the line segment
is approximately 9 nearest-neighbor distances in the [110] direction. The slip
plane normal is given by the cross product ns = l × b ∼ [111], and the
dislocation thus glides in the (111) plane.

2.10.4 Measurement of Defect Speed

Accurate determination of the propagation speed of defects is crucial, in
particular in the analysis of rapid materials failure phenomena.
For instance, the crack tip velocity is an important measure in the analysis
of dynamic fracture of brittle materials. The crack tip velocity is determined
from finding the crack tip position. The geometry of the crack tip is deter-
mined by finding the surface atom at the tip of the crack. This is achieved
by considering the surface atom with the highest y position, for instance, for
the case of crack propagation in the y-direction. This search is carried out
in the interior of a search region inside the slab. This quantity is averaged
over a small time interval to eliminate very high frequency fluctuations. To
obtain the steady state velocity of the crack, the measurements of the crack
speed must be taken within a region of constant stress intensity factor (see, for
instance [114] for an additional discussion on this issue). Figure 2.39 illustrates
this approach based on the energy filtering method.
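The procedure described above can be sketched as follows (illustrative Python; the frame format, search region, and smoothing window are hypothetical choices for this sketch):

```python
def crack_tip_history(frames, search_region, window=3):
    """Track the crack tip from per-frame surface-atom positions.

    Each frame is (time, list of (x, y) surface-atom positions). The
    tip is the atom with the largest y inside the search region
    (x_min, x_max); the tip position is then averaged over `window`
    frames to suppress high-frequency fluctuations.
    """
    x_min, x_max = search_region
    raw = []
    for t, atoms in frames:
        ys = [y for (x, y) in atoms if x_min <= x <= x_max]
        raw.append((t, max(ys)))
    smooth = []
    for i in range(len(raw)):
        lo = max(0, i - window // 2)
        hi = min(len(raw), i + window // 2 + 1)
        smooth.append((raw[i][0], sum(y for _, y in raw[lo:hi]) / (hi - lo)))
    return smooth

def crack_speed(history):
    """Average tip speed by finite differences over the smoothed history."""
    (t0, y0), (t1, y1) = history[0], history[-1]
    return (y1 - y0) / (t1 - t0)
```

As noted above, a steady-state value is only meaningful if the measurement is taken within a region of constant stress intensity factor.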

2.10.5 Visualization Methods for Biological Structures

For the visualization of organic molecules (such as proteins), specific tools
have been developed. Many visualization tools exist that are capable of dis-
playing biological protein molecules and molecular clusters. A rather versatile,
powerful, and widely used visualization tool is the Visual Molecular Dynamics
(VMD) program [148]. This software enables one to render complex molecular
geometries using particular coloring schemes.
It also enables one to highlight important structural features of proteins by
using a simple graphical representation, such as alpha-helices or the protein's
backbone; this simple graphical representation is often referred to as a cartoon
model. These visualizations are often the key to understanding complex dynam-
ical processes and mechanisms when analyzing the motion of protein structures
and protein domains, and they represent a filter that makes useful information
visible and accessible for interpretation. Figure 2.42 shows the visual analysis
of a simple alpha-helix protein structure, with different visualization options.

Fig. 2.42 Analysis of a simple alpha-helix protein structure, with different
visualization options, plotted using VMD [148]

2.10.6 Other Methods

Other researchers have used a common neighbor analysis to analyze the
results of molecular dynamics simulations of crystalline structures [149–151].
In this method, the number of nearest neighbors is calculated, which allows
one to distinguish between different defects. Analysis of more complex
structures, such as grain boundaries, is possible based on the
medium-range-order (MRO) analysis. This method is capable of determining
the local crystallinity class. The MRO analysis has been applied successfully
in the analysis of simulations of nanocrystalline materials, where an exact
characterization of the grain boundary structure is important (e.g., [152–154]).

2.11 Distinguishing Modeling and Simulation


The specific meaning of modeling versus simulation has been introduced in
Sect. 2.2. Here we briefly explain the application of these terms for the exam-
ple of a generic molecular dynamics model. Table 2.5 provides a list of tasks
associated with molecular dynamics and its classification into modeling and
simulation.

2.12 Application of Mechanical Boundary Conditions


The application of boundary conditions in atomistic and molecular systems
is essential, in particular for studies of the mechanical behavior.

Table 2.5 Distinguishing modeling and simulation, for tasks associated with
classical molecular dynamics

Task                                                   Modeling   Simulation
Mathematical model of physical system                     ×
Numerical solution of problem                                          ×
Choice of model geometry                                  ×
Choice of numerical integrator (e.g., Verlet)                          ×
Choice of interatomic potential and parameters            ×
Force calculation                                                      ×
Choice of boundary conditions                             ×
Implementation of boundary conditions                                  ×
Choice of system size                                     ×
Choice of thermodynamical ensemble                        ×
Algorithm to specify the thermodynamic ensemble
  (e.g., temperature control, pressure control)                        ×

Fig. 2.43 Simulation method of domain decomposition via the method of virtual
atom types. The atoms in region 2 do not move according to the physical equations
of motion, but are displaced according to a prescribed displacement history. An
initial velocity gradient, as shown in the right half of the plot, is used to provide
smooth initial conditions

Most straightforward are approaches that apply displacement boundary con-
ditions. In these applications, the dynamics of the corresponding atoms (or
groups of atoms forming a physical domain whose displacement is prescribed)
is altered so that this group of atoms follows a prescribed motion rather than
the dynamics dictated by the equations of motion.
To model weak layers or different interatomic interactions in different
regions of the simulation domain, one can assign a virtual type to particu-
lar groups of atoms. On the basis of the type definition of interacting pairs
of atoms, it is possible to calculate different interatomic interactions. For
instance, this enables one to model an atomically sharp crack tip by removing
any atomic interaction across a material plane (effectively describing this part
as a free surface). An example for this type of domain decomposition is shown

in Fig. 2.43. To apply mechanical load by controlling the displacement, a lin-
ear velocity gradient is established prior to simulation to avoid shock wave
generation from the boundaries (see also Fig. 2.43). To strain the system, a
few layers of atoms at the boundary of the crystal slab are not moved accord-
ing to the natural equations of motion (e.g., by the Verlet algorithm), but are
instead displaced in each step, according to the prescribed displacement, with
the velocity that matches the initial velocity field. This procedure has been
used in several atomistic studies of dynamic fracture [28, 146, 155, 156]. It is
possible to stop the increase of loading after a prescribed loading time, after
which the boundaries are kept fixed. In an alternative method, the system
can be strained prior to the beginning of the simulation (according to the
particular loading direction), and the outermost material layers are then kept
fixed during the simulation (that is, atoms in this group do not move at all).
The application of stress or pressure boundary conditions can be more
challenging, but it can be achieved by utilizing appropriate ensemble schemes,
such as the Parrinello–Rahman method. This approach enables one to
prescribe a particular stress tensor for the system. By changing the prescribed
value of the stress tensor, one can simulate slow loading conditions.
To apply forces that induce deformation in a molecule, steered
molecular dynamics (SMD) has evolved into a useful tool. Steered molecu-
lar dynamics is based on the concept of adding a harmonic moving restraint
to the center of mass of a group of atoms. This leads to the addition of the
following potential to the Hamiltonian of the system:
U (r_1 , r_2 , ..., t) = (1/2) k (v t − (r(t) − r_0 ) · n)² ,    (2.73)
where r(t) is the position of the restrained atoms at time t, r_0 denotes their
original coordinates, and v and n denote the pulling velocity and pulling
direction, respectively. The net force applied to the pulled atoms is

F (r_1 , r_2 , ..., t) = k (v t − (r(t) − r_0 ) · n) .    (2.74)

By monitoring the applied force (F ) and the position of the pulled atoms
over the simulation time, it is possible to obtain force-vs.-displacement
data that can be used to derive mechanical properties such as the bending
stiffness or Young's modulus. SMD studies are typically carried out with a
spring constant k = 10 kcal mol−1 Å−2 , albeit this value can be varied
depending on the particular situation considered.
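Eqs. (2.73) and (2.74) can be evaluated directly. The following illustrative Python sketch returns the restraint energy and the magnitude of the pulling force for a given restraint position; in practice the restraint acts on the center of mass of the pulled group of atoms, and the function name and scalar form are simplifications for this sketch.

```python
def smd_force(k, v, t, r, r0, n):
    """SMD moving harmonic restraint (Eqs. 2.73/2.74): pull along the
    unit vector n with velocity v through a spring of stiffness k.

    Returns (U, F): the restraint energy and the net pulling force
    along n at time t, for current position r and reference r0.
    """
    disp = sum((r[i] - r0[i]) * n[i] for i in range(3))  # progress along n
    stretch = v * t - disp          # lag behind the moving restraint
    U = 0.5 * k * stretch ** 2      # Eq. (2.73)
    F = k * stretch                 # Eq. (2.74)
    return U, F
```

When the pulled atoms lag behind the moving restraint, the force grows linearly with the lag; when they keep up exactly, the force vanishes.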
Figure 2.44 depicts a schematic that illustrates how load is applied with
SMD, comparing an AFM experiment with the numerical scheme. One of
the first applications of the SMD technique was to study protein unfolding.
Fig. 2.45 shows steered molecular dynamics simulations of I27 extensibility
under constant force, illustrating the molecular geometry under different force
levels (compare with Fig. 1.16a) as well as the force–extension curve (compare
with Fig. 1.16b).

Fig. 2.44 Schematic to illustrate the use of steered molecular dynamics to apply
mechanical load to small-scale structures (subplot (a): AFM experiment; subplot
(b): steered molecular dynamics model)

2.13 Summary
We summarize the main points presented in this chapter. We discussed analy-
sis techniques to extract useful information from molecular dynamics results,
including the velocity autocorrelation function, the atomic stress, and the
radial distribution function. These are useful analysis methods since they pro-
vide quantitative information about molecular structure in the simulation, for
example during phase transformations, to study how atoms diffuse, for elastic
(mechanical) properties, and others. We introduced several interatomic poten-
tials that describe the atomic interactions. The basic approach in developing
such models is to condense out electronic degrees of freedom and to model
atoms as point particles.
Properties accessible to molecular dynamics can be classified into these
broad categories [86]:
• Structural – crystal structure, g(r), defects such as vacancies and intersti-
tials, dislocations, grain boundaries, precipitates
• Thermodynamic – equation of state, heat capacities, thermal expansion,
free energies
• Mechanical – elastic constants, cohesive and shear strength, elastic and
plastic deformation, fracture toughness
• Vibrational – phonon dispersion curves, vibrational frequency spectrum,
molecular spectroscopy
• Transport – diffusion, viscous flow, thermal conduction

Fig. 2.45 Steered molecular dynamics simulations of I27 extensibility under con-
stant force. Subplot (a) shows snapshots of the structure of the I27 module simulated
at a force of 50 pN (I, at 1 ns) and 150 pN (II, at 1 ns). At 50 pN, the hydrogen
bonds between strands A and B are maintained, whereas at 150 pN they are broken.
Subplot (b) displays the corresponding force–extension relationship obtained from
the simulations. The discontinuity observed between 50 and 100 pN corresponds
to an abrupt extension of the module by 4–7 Å caused by the rupture of the AB
hydrogen bonds, and the subsequent extension of the partially freed polypeptide
segment. Reprinted with permission from Macmillan Publishers Ltd., Nature [6]  c
1999

Molecular dynamics is a useful method to
• Perform virtual experiments, that is, computational experiments
• Implement a computational microscope to visualize and analyze micro-
scopic processes
• Gain fundamental understanding about behavior of materials
Further, molecular dynamics:
• Has an intrinsic length scale, given by the distance of atoms in the material
(typically on the order of the length of a chemical bond), that is, between
1 and 5 Å
• Handles stress singularities (e.g., at crack tips) intrinsically
• Is ideal for deformation under high strain rate and extreme conditions,
which are not accessible by other methods (such as the finite element
method, discrete (mesoscale) dislocation dynamics, and other simulation
approaches)
