
Molecular Simulation/Printable version

Molecular Simulation

The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at
https://en.wikibooks.org/wiki/Molecular_Simulation

Permission is granted to copy, distribute, and/or modify this document under the terms of the
Creative Commons Attribution-ShareAlike 3.0 License.

Contents
Charge-Charge Interactions
Example
Further Reading
Charge-Dipole Interactions
Charge-Dipole Interactions
Derivation of Charge-Dipole Interactions Energy
Example: Charge-Dipole Interaction
References
Dipole-Dipole Interactions
Derivation
Quadrupole-Quadrupole Interactions
The Quadrupolar Phenomenon
Calculating Quadrupole-Quadrupole Interactions
Applications of Quadrupole-Quadrupole Interactions to Molecular Organization
Appendix
Induced Polarization
Units of Polarizability
Polarizability in Intermolecular Interactions
Repulsive Interactions
Pauli Exclusion Principle
Exchange Force
Comparison to Attractive Forces
Van der Waals Model of Fluid (Hard Sphere)
Sources
See Also
Hydrogen Bonds
Hydrogen Bonding in Liquids
Hydrogen Bonding in Solids
Strength of Interactions
Dependence on Distance
Rotational Averaging
Rotational Averaging
Derivation of Rotationally-Averaged Charge-Dipole Interaction Potential
The Lennard-Jones Potential
Pauli Repulsion
Further Reading
London Dispersion
Combination Rules
Example
References
See Also
Intramolecular Forces
Bond Stretches
Bond Angle Bending
Torsional Rotations
Improper Torsion
Internal Non-Bonded Interactions
Molecular Mechanical Force Fields
General Form of Potential Energy Function
Parameterization
Empirical Parameters
Calculating Electrostatic Interactions
Target Electrostatic Potential Data from Quantum Chemical Calculations
Molecular Mechanical Partial Atomic Charges
Generalized Amber Force Field (GAFF)
Lennard-Jones Potential
Bonding Interactions
Bond Stretch
Angle Stretch
Dihedral Parameters
Calculation of molecular forces using quantum chemistry
Intermolecular Interactions
Basis Set Superposition Error
Counterpoise Corrections
Intramolecular Interactions
Dihedral Potentials
Statistical properties
Probability Distributions
Gaussian Distribution
Conformational Distribution
Calculating Physical Properties
Averages
Example: Conformationally Averaged Dipole Moment
Variance
References
Thermodynamic ensembles
Microcanonical ensemble
Canonical ensemble
Isothermal-Isobaric ensemble
Phase space
The Hamiltonian
Phase Space
Phase Point
Example
The Classical Partition Function
Many Body Systems
References
Dilute gases
Ideal Gas Law
Virial Theorem
Hard Sphere Potential
Lennard-Jones Potential
References
Potential of mean force
Relation to Radial Distribution Function
Derivation
Sources
See Also
Langevin dynamics
Velocity of the Particle
Displacement of the Particle
References
Periodic Boundary Conditions
Non-Bonded Interactions Under Periodic Boundary Conditions
Minimum Image Convention
Periodic Boundary Conditions with NAMD
References
Further Reading
Treatment of Long Range Forces
Cutoffs
Ewald Decomposition
Particle Mesh Ewald
See Also
Monte Carlo Methods
Monte Carlo Integration
Metropolis Monte Carlo Simulation
Sampling
Limitations of Monte Carlo Simulations
Equilibration
Equilibrium properties
See Also
Molecular Dynamics
Verlet Algorithm
Derivation of the Verlet Algorithm
Derivation of the Velocity Verlet Algorithm
See Also
Time step
Time reversibility
Sampling Thermodynamic Ensembles Using Molecular Dynamics
Microcanonical ensemble (NVE)
Sampling Different Ensembles
Canonical ensemble (NVT)
Isothermal–isobaric ensemble (NPT)
Visualizing the MD Simulations
See Also
Literature
Molecular Dynamics of the Canonical and Isothermal-Isobaric Ensembles
The Canonical and Isothermal-Isobaric Ensembles
Thermostats/ Barostats
Nose-Hoover thermostat derivation
Umbrella Sampling
Biased molecular dynamics simulations
Acquire free energy profile from biased simulations
Choice of Bias Potential
Calculation of the Potential of Mean Force from Umbrella Sampling Data
See also
References
Free Energy Methods
Replica Exchange Molecular Dynamics
Advantages
The Acceptance Probability
See Also
Coarse grain models
The Mathematical Representation
Common Coarse-Grain Model
The MARTINI Model
See also
References
Polarizable force fields
Polarizability
Drude Model (also known as Charges on Springs)
AMOEBA Model (Atomic Multipole Optimized Energetics for Biomolecular Applications)
References
Radial Distribution Functions
Radial Distribution Function: Solids
Radial Distribution Function: Gases
Radial Distribution Function: Liquids
Coordination Numbers
See Also
References
Dielectric constants
Diffusion Coefficients
Einstein Equation
Green-Kubo Theory
See Also
Interfacial properties
Surface Tension
Surfactants
See Also
Transport properties
Viscosity
Thermal Conductivity
See Also
Solids
The Structure and Dynamics of Simple Solids
Differences in the Radial Distribution Function
References
Definition of variables
Molecular Dynamics Simulations Using NAMD
Charge-Charge Interactions
All intramolecular and intermolecular interactions are the result of electrostatic interactions between charged particles. All molecules are composed of three subatomic particles: protons, neutrons, and electrons. Neutrons carry no charge, while protons and electrons carry charges of equal magnitude but opposite sign. By convention, protons are defined as having a positive charge and electrons a negative charge. The magnitude of these charges is a fixed constant known as the elementary charge, e = 1.602176565(35) × 10⁻¹⁹ C. In the equations that follow, ε₀ is the vacuum permittivity constant, which is equal to 8.854187817... × 10⁻¹² F/m (farads per metre).

The force between two charged particles due to this electrostatic interaction is,

F(r) = qA qB / (4πε₀ r²)

Coulomb's law (force)

In this equation, r is the distance between particles A and B. The charge of a particle is given by the variable q; a charge is a scalar quantity with a sign and a magnitude.
It is often more convenient to discuss intermolecular forces in terms of the potential energy of the interaction. The potential energy of the interaction of two charged particles, labelled A and B, can be calculated using Coulomb's law. It is determined by integrating the force the particles would experience as they are moved from infinite separation, where the intermolecular interaction is zero, to the distance (r) they are actually separated by,

V(r) = qA qB / (4πε₀ r)

Coulomb's law (potential energy)

Ions interact through charge-charge Coulombic interactions. If the charges have the same sign (e.g., two positive ions), the interaction is repulsive. If the charges have opposite signs, the interaction is attractive.

Example
What is the potential energy of the electrostatic interaction between an Na+ ion and a Cl- ion separated by 5.00 Å?

Solution: These are both ions at an intermediate distance, so their electrostatic interaction can be approximated using Coulomb's law. The charge-charge distance is given (5.00 Å). The charges qA and qB are +e for the Na⁺ cation and −e for the Cl⁻ anion, respectively. The distance and charges must be converted to the SI units of m and C, respectively, before evaluating V(r) = qA qB / (4πε₀ r).
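The arithmetic in this example can be checked with a short script; this is a minimal sketch using standard physical constants (the variable names are illustrative):

```python
import math

# Physical constants (SI units)
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
N_A = 6.02214076e23        # Avogadro constant, 1/mol

# Charges of Na+ and Cl-, and their separation in metres
qA, qB = +e, -e
r = 5.00e-10               # 5.00 angstroms

# Coulomb's law (potential energy)
V = qA * qB / (4 * math.pi * eps0 * r)   # joules per ion pair
V_molar = V * N_A / 1000                 # kJ/mol

print(f"V = {V:.3e} J per pair = {V_molar:.1f} kJ/mol")
```

This evaluates to roughly −4.61 × 10⁻¹⁹ J per ion pair, or about −278 kJ/mol, a strongly attractive interaction.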

Further Reading
Wikipedia:Coulomb's law
Charge-Dipole Interactions
Charge-Dipole Interactions
Charge-dipole interactions occur between an atom with a formal net charge, such as Na⁺ (qion = +1) or Cl⁻ (qion = −1), and a dipole. A dipole is a vector (μ) that connects two charged species of opposite sign (e.g., the qion = +1 and qion = −1 ends of NaCl) separated by a distance l.

The dipole moment of a molecule depends on a few factors.

1. Differences in electronegativity of the atoms in the molecule. The larger the difference, the greater the dipole moment.
2. The distance between the positive and negative ends of the dipole. A larger distance corresponds to a larger dipole moment.
3. Multiple polar bonds pointing in the same direction increase the molecular dipole moment, whereas if they point in opposite directions the dipoles cancel each other out.

A cation (+) has an attractive interaction with the negative pole of a dipolar molecule, but a repulsive interaction with the positive pole of a dipolar molecule.

For diatomic molecules, the dipole moment can be calculated by the formula μ = q·r, where μ is the dipole moment, q is the atomic charge, and r is the distance between the charges.

Vector geometry can be used to derive an expression for the potential energy of an ion interacting with a dipolar molecule in terms of the distance and angle of orientation between the ion and the centre of the dipole.

Polar bonds pointing in the same direction increase the molecular dipole moment; polar bonds in opposite directions cancel each other out.
From the potential energy equation for a charge-dipole interaction,

V(r, θ) = −qion μ cos(θ) / (4πε₀ r²)

you can see that there are four factors that affect the potential energy:

the distance (r) from the charge (qion) to the dipole
the magnitude of the dipole (μ)
the magnitude of the charge (qion)
the angle of the interaction (θ)

Depending on the angle, the interaction can be repulsive, when the charge is close to the like-charged end of the dipole (i.e., +/+); attractive, when the charge is close to the oppositely charged end of the dipole (i.e., +/−); or zero, when the positive and negative ends of the dipole are at equal distances from the charge (e.g., perpendicular).

Derivation of Charge-Dipole Interaction Energy

Express the interaction as the sum of the Coulombic interactions of the ion with the two poles of the dipole:

V = (qion q / (4πε₀)) (1/r₊ − 1/r₋)

The distances from the charge to the two poles are

r₊ ≈ r − (l/2)cos(θ)   and   r₋ ≈ r + (l/2)cos(θ)

This approximation is valid if the distance between the charge and the dipole (r) is much greater than the length of the dipole (l), making the (l cos(θ)/2)² term negligible.

Substituting these lengths into the potential energy equation and using μ = ql gives

V(r, θ) = −qion μ cos(θ) / (4πε₀ r²)
Example: Charge-Dipole Interaction


What is the interaction energy of an ammonia molecule (μ = 1.42 D) with a Cl⁻ ion that is 2.15 Å away, when the dipole moment of ammonia is aligned with the Cl⁻ ion (θ = 0)?
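A sketch of the numerical solution, taking θ = 0 as the attractive alignment of the ammonia dipole with the anion (constants and unit conversions as in the earlier example):

```python
import math

e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
D = 3.33564e-30            # 1 debye in C*m

mu = 1.42 * D              # dipole moment of ammonia
q = e                      # magnitude of the Cl- charge
r = 2.15e-10               # separation in metres
theta = 0.0                # aligned, attractive orientation

# Charge-dipole interaction energy for the attractive alignment
V = -q * mu * math.cos(theta) / (4 * math.pi * eps0 * r**2)
V_molar = V * 6.02214076e23 / 1000   # kJ/mol
print(f"V = {V:.3e} J = {V_molar:.1f} kJ/mol")
```

This gives roughly −1.48 × 10⁻¹⁹ J per pair, or about −89 kJ/mol.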

References
Dipole-Dipole Interactions
Dipole-dipole interactions occur between two molecules when each of the molecules has a permanent dipole moment. These interactions can be attractive or repulsive depending on the relative orientation of the two dipoles.

A permanent dipole moment occurs when a molecule exhibits uneven charge distribution. This is caused by
atoms of different electronegativities being bonded together. The electron density of the molecule will be
polarized towards the more electronegative atoms. This excess of negative charge in one part of the
molecule and the deficit of charge in another area of the molecule results in the molecule having a
permanent dipole moment.
Two molecules that possess permanent dipole moments have a dipole-dipole interaction. Attractive interactions occur between the opposite-sign poles of the molecules (+/−). Repulsive interactions occur between the like-sign poles (+/+ and −/−).

The orientation of the two interacting dipoles can be described by three angles (θ₁, θ₂, Φ). The angle Φ is the dihedral angle between the two dipoles, which corresponds to their relative rotation. It is this angle that accounts for the greatest change in the potential energy of the electrostatic interaction. Attractive dipole-dipole interactions occur when oppositely charged ends of the dipole moments of the two molecules are in closer proximity than the like-charged ends. When the opposite- and like-charged ends of the two dipoles are equidistant, there is a net-zero electrostatic interaction. This net-zero interaction occurs when the dihedral angle Φ is at 90°, i.e., when the two dipoles are perpendicular to each other. Two dipoles in a parallel orientation with their like-signed poles adjacent give a repulsive interaction.

The interaction between two dipolar molecules (AB and CD) can be described using vector geometry.

Derivation
The potential energy of a dipole-dipole interaction can be derived by applying a variation of Coulomb's law for point charges. By treating the two
dipoles as a series of point charges, the interaction can be described by a sum of all charge-charge interactions, where the distance between charges
is described using vector geometry.

Similarly to the derivation of a charge-dipole interaction, the derivation for dipole-dipole interactions begins by defining the distance between the centres of the two dipoles as r, and the length of each dipole as l. Trigonometry can then be used to determine the distance between each pair of interacting charges (r_AC, r_AD, r_BC, and r_BD).

The resulting expression can be simplified so that the potential energy is a function of four variables:

V(r, θ₁, θ₂, Φ) = −(μ₁ μ₂ / (4πε₀ r³)) (2 cos(θ₁) cos(θ₂) − sin(θ₁) sin(θ₂) cos(Φ))

where μ₁ and μ₂ are the magnitudes of the two dipole moments, r is the distance between the two dipoles, and θ₁, θ₂, and Φ describe the orientation of the two dipoles. This potential energy decays at a rate of r⁻³.

The ideal attractive potential occurs when the two dipoles are aligned such that Φ = θ₁ = θ₂ = 0. The interaction potential energy can be rotationally averaged to provide the interaction energy thermally averaged over all dipole orientations.
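The angular dependence of the dipole-dipole energy can be explored numerically. A minimal sketch, using an illustrative water-like dipole moment of 1.85 D (an assumed value, not taken from this text):

```python
import math

eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
D = 3.33564e-30            # 1 debye in C*m

def dipole_dipole(mu1, mu2, r, t1, t2, phi):
    """Dipole-dipole interaction energy (J) for the geometry derived above."""
    orient = 2*math.cos(t1)*math.cos(t2) - math.sin(t1)*math.sin(t2)*math.cos(phi)
    return -mu1 * mu2 * orient / (4 * math.pi * eps0 * r**3)

mu = 1.85 * D   # water-like dipole moment (illustrative)
r = 4.0e-10     # 4 angstrom separation

head_to_tail  = dipole_dipole(mu, mu, r, 0.0, 0.0, 0.0)          # attractive
head_to_head  = dipole_dipole(mu, mu, r, 0.0, math.pi, 0.0)      # repulsive
perpendicular = dipole_dipole(mu, mu, r, 0.0, math.pi / 2, 0.0)  # net zero
```

The head-to-tail alignment (Φ = θ₁ = θ₂ = 0) gives the most negative energy, the head-to-head alignment is equally repulsive, and the perpendicular arrangement gives a net-zero interaction, as described above.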
Quadrupole-Quadrupole Interactions
The Quadrupolar Phenomenon
A molecular quadrupole moment arises from an uneven distribution of charge inside a molecule. Unlike a
molecular dipole moment, quadrupole moments cannot be described using two point charges separated by a
distance.

Quadrupoles are the second-order term in the multipole expansion of the electric field of a molecule. The general form of a molecular quadrupole moment is a rank-two tensor (i.e., a matrix). For linear quadrupoles, we can describe the quadrupolar interaction by looking at a single scalar moment, Θ.

In chemistry, some special cases arise when looking at linear molecules such as carbon dioxide (CO₂) and carbon disulfide (CS₂). Each of these molecules has a quadrupole moment, but they are of opposite sign. These phenomena can be explained by invoking Pauling electronegativity arguments. Oxygen has a greater electronegativity than carbon, so there is more electron density around the terminal oxygen atoms than in the centre. The opposite is true for carbon disulfide: the sp hybridized carbon is more electronegative than the terminal sulfur atoms, and so electron density is localized around the centre of the molecule. The physical manifestation of the two different electronic distributions is that carbon dioxide has a negative quadrupole moment, while that of carbon disulfide is positive (Θ = +1.2 × 10⁻³⁹ C·m²).

Contour plot of the equipotential surfaces of an electric quadrupole field.

Calculating Quadrupole-Quadrupole Interactions

The interaction energy of two linear quadrupoles is given by,

V(r, θ₁, θ₂, Φ) = (3 Θ₁ Θ₂ / (4 (4πε₀) r⁵)) ξ(θ₁, θ₂, Φ)

Linear Quadrupole–Quadrupole Interaction Energy

Figure 1. Molecular structures showing the regions of partial charge in a) carbon dioxide and b) carbon disulfide.

The quadrupole–quadrupole interaction energy is proportional to the reciprocal of the distance between the two particles (r) to the power of 5. Due to this higher exponent, quadrupole–quadrupole interactions weaken with increasing intermolecular distance at a faster rate than charge-charge interactions or charge-dipole interactions. The quadrupolar interaction energy also depends on the orientation of the two molecules with respect to each other. This is expressed in the orientational term, ξ(θ₁, θ₂, Φ), which ranges between −1 and 1.
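The effect of the r⁻⁵ decay can be made concrete by comparing how quickly each electrostatic interaction falls off when the separation is doubled; a minimal numeric check:

```python
# Distance scaling of electrostatic interactions, V ~ 1/r^n
def falloff_on_doubling(exponent):
    """Factor by which |V| decreases when the separation is doubled."""
    return 2.0 ** exponent

charge_charge = falloff_on_doubling(1)          # V ~ 1/r
dipole_dipole = falloff_on_doubling(3)          # V ~ 1/r^3
quadrupole_quadrupole = falloff_on_doubling(5)  # V ~ 1/r^5

print(charge_charge, dipole_dipole, quadrupole_quadrupole)
```

Doubling the distance weakens a charge-charge interaction by a factor of 2, a dipole-dipole interaction by a factor of 8, and a quadrupole-quadrupole interaction by a factor of 32.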

Applications of Quadrupole-Quadrupole Interactions to Molecular Organization
Quadrupole-quadrupole interactions fall under the category of electric phenomena. That is, they are governed by Coulomb's law just like charge-charge and dipole-dipole interactions. Thus, the conventions of 'like repels like' and 'opposites attract' also apply to quadrupole-quadrupole interactions. See Figure 1 for a visualization.

Benzene (C₆H₆) has a quadrupole moment resulting from its π bonding system. The most stable geometries of a complex composed of two benzene molecules are the 'T-shaped' and 'parallel displaced' arrangements, where the attractive quadrupole-quadrupole interaction is strongest.

Figure 2. CO₂ molecules have quadrupole–quadrupole interactions with each other. The central carbon carries a partial positive charge, while the terminal oxygens carry partial negative charges. The like-charged atoms have repulsive electrostatic interactions (+/+ and −/−), while the oppositely charged atoms have attractive electrostatic interactions (+/−).

Figure 3. Schematics of a) the regions of partial charge in benzene, b) the parallel-displaced attractive orientation of two benzene molecules and c) the T-shaped attractive orientation of two benzene molecules.

Hexafluorobenzene (C₆F₆) is another aromatic molecule which possesses a quadrupole moment, although it is opposite in sign to that of benzene. A partial negative charge on the periphery of the ring is created by the strongly electron-withdrawing fluorine atoms, which cause the centre of the ring to be electron deficient. The favoured alignments of two hexafluorobenzene molecules are still 'T-shaped' and 'parallel displaced'. Because of the complementary differences in electronic distribution, benzene and hexafluorobenzene can stack in a parallel geometry.

Figure 4. Schematics of a) the regions of partial charge in hexafluorobenzene and b) the interaction orientation of hexafluorobenzene and benzene molecules.

Appendix

Complete Equation for Charge-Linear Quadrupole Interaction Energy

Complete Equation for Dipole-Linear Quadrupole Interaction Energy

Complete Equation for Linear Quadrupole–Quadrupole Interaction Energy


Induced Polarization
When a molecule is exposed to an electric field ɛ, electron density is polarized away from the nuclei. A
dipole is thus created due to the asymmetric distribution of electron density. The induced molecular dipole
created is proportional to the strength of the electric field (ɛ) as well as the polarizability of the molecule, α.

Polarizability allows us to better understand the interactions that occur between non polar atoms and
molecules and other electrically charged species. In general, polarizability correlates with the interaction
between electrons and the nucleus.

Units of Polarizability

Dimensional analysis of the induced-polarization equation shows that α has SI units of C²·m²·J⁻¹. For convenience, we instead work with the polarizability volume:

α' = α / (4πε₀)

which has units of volume and roughly corresponds to the molecular volume. A large polarizability corresponds to a "soft" cloud of electron density, which is easily distorted by an electric field.

In isolation, atoms have spherical charge distributions with no dipole moment (top). An applied electric field (ɛ) pushes electron density in the direction of the field, creating a depletion of charge on one side of the atom and an increase in charge on the other side. This creates an induced dipole moment.

Polarizability in Intermolecular Interactions


When we consider the effect that a charge (Q) has on a neutral atom at distance r, an induced dipole is created:

μ_induced = α ɛ(r)

The charge-induced dipole electrostatic force can be approximated by the difference between the force on the near pole of the induced dipole and the force on the far pole, which depends on how the field varies with distance. The electric field at distance r from a point charge Q is:

ɛ(r) = Q / (4πε₀ r²)

Taking the derivative with respect to r:

dɛ/dr = −2Q / (4πε₀ r³)

Therefore, the net force on the induced dipole is:

F(r) = μ_induced (dɛ/dr) = −2 α Q² / ((4πε₀)² r⁵)

We can thus get the intermolecular potential by integrating this force over r:

V(r) = −α Q² / (2 (4πε₀)² r⁴)

A schematic of how (i) two neutral atoms can have an attractive electrostatic interaction when (ii) the electron density of one atom (left) spontaneously polarizes to form an instantaneous dipole. (iii) This polarizes the neighbouring atom.
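The charge-induced dipole potential derived above can be evaluated for a concrete case; this sketch assumes a +1 ion and an argon-like polarizability volume of about 1.66 Å³ (an illustrative literature-style value, not taken from this text):

```python
import math

e = 1.602176634e-19            # elementary charge, C
eps0 = 8.8541878128e-12        # vacuum permittivity, F/m
fourpi_eps0 = 4 * math.pi * eps0

# Polarizability volume of ~1.66 A^3 (argon-like, illustrative)
alpha_prime = 1.66e-30                  # m^3
alpha = alpha_prime * fourpi_eps0       # SI polarizability, C^2 m^2 / J

Q = e            # charge of a +1 ion
r = 3.0e-10      # 3 angstrom separation

# Charge-induced dipole interaction energy
V = -alpha * Q**2 / (2 * fourpi_eps0**2 * r**4)
V_molar = V * 6.02214076e23 / 1000      # kJ/mol
print(f"V = {V:.3e} J = {V_molar:.1f} kJ/mol")
```

With these values the interaction is attractive and on the order of −14 kJ/mol, noticeably weaker than the charge-charge interaction computed earlier.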
Repulsive Interactions
Pauli Exclusion Principle
For a system of two indistinguishable fermions (particles of half-integer spin: protons, electrons, etc.), the total wavefunction is an antisymmetric combination of independent single-particle states ψa and ψb:

Ψ(x₁, x₂) = (1/√2) [ψa(x₁) ψb(x₂) − ψb(x₁) ψa(x₂)]  [1]

where x is a spatial variable. Notice that the wavefunction vanishes if ψa = ψb. In the case of an electron, the wavefunction also contains a spinor, χ, which distinguishes between the spin-up and spin-down states. For simplicity, assume that both electrons have the same spin. This results in the Pauli exclusion principle:

Two electrons of the same spin cannot exist in the same state.

Since there are only two allowable spins (+1/2, −1/2), this guarantees that each orbital contains a maximum of two electrons, of opposing spin.

Exchange Force
The apparent force that is observed when electron orbitals overlap originates from the requirement that the wavefunction be antisymmetric under exchange,

Ψ(x₁, x₂) = −Ψ(x₂, x₁)

This is commonly known as the exchange force. When the spin part is symmetric (e.g., both spins up), the spatial wavefunction must be antisymmetric (antibonding), resulting in a repulsive interaction. This is also the case for filled electron orbitals; helium, with a 1s² electron configuration, does not bond with itself because the interacting electrons would reside in the destabilized antibonding orbital (verified with the MO diagram of He–He).

Figure: Electron Density Repulsion

The Pauli repulsion energy[2] is of the form

V_rep(r) = A e^(−Br)

where A and B are constants.

Notice that this parallels the radial decay of electron density around a nucleus (Figure: Electron Density Repulsion). Intuitively, the phenomenon of Pauli repulsion should only exist in the environment of electrons.

It is of practical advantage to replace the exponential function with a simpler polynomial expression, as in

V_rep(r) = C / r¹²

where C is a constant.

A comparison of the polynomial and exponential forms of the potential is illustrated on the right (Figure: Comparison of Pauli Repulsion Formulae). While the functions appear to be very different at small r, the difference is immaterial considering that the probability of the system being in such a high-energy regime is very low.

Figure: Comparison of Pauli Repulsion Formulae
Comparison to Attractive Forces

Electrostatic Potential

V(r) ∝ 1 / r^(n+m+1)

The potential for electrostatic interactions, such as charge-charge, charge-dipole, dipole-dipole, and quadrupole-quadrupole, always takes on some variation of the form above, where n and m are integers signifying the orders of the poles in the multipole expansion[3] (see Table: Range of Electrostatic Interactions).

Table: Range of Electrostatic Interactions

                     Monopole (n=0)   Dipole (n=1)   Quadrupole (n=2)
  Monopole (m=0)     1/r              1/r²           1/r³
  Dipole (m=1)       1/r²             1/r³           1/r⁴
  Quadrupole (m=2)   1/r³             1/r⁴           1/r⁵

For the sake of completeness, also consider the charge-induced dipole interaction,

V(r) = −Q² α / (2 (4πε₀)² r⁴)

and the dipole-induced dipole and dispersion interactions,

V(r) = −μ² α / ((4πε₀)² r⁶)   and   V(r) = −(3/2) (I₁ I₂ / (I₁ + I₂)) α₁ α₂ / ((4πε₀)² r⁶)

where α is the polarizability of the molecule, I is the ionization potential, and Q is the charge.

Attraction and Repulsion

For electrostatic interactions following this general form, the potential energy for the interaction between oppositely charged particles will approach negative infinity as the distance between them approaches zero. This would suggest that if electrostatic interactions were the only forces in our model, oppositely charged ions would collapse onto each other as r approaches zero. Clearly this is an inaccurate representation of close-range interactions, and some repulsive term must also be considered.

Pauli repulsion is a comparatively short-range interaction, with V_rep ∝ r⁻¹². The slope of this curve is very large at small internuclear separations; that is to say, the repulsive force rapidly diverges as the distance between nuclei approaches zero. For this reason it is said that the Pauli repulsive term is much "harder" than the other electrostatic forces.

Van der Waals Model of Fluid (Hard Sphere)


In liquids it is very improbable to find two particles at a highly repulsive distance (i.e., at distances where these functions diverge). Therefore, it is unnecessary for the model to be exact at these distances, as long as it is repulsive. It is insightful to consider a fluid as a collection of hard spheres, interacting with only Pauli repulsive forces. Neglecting the attractive forces between particles is justified by the following:

Attractive forces are weak and isotropic. For a uniform (constant density) fluid these forces will average to zero.
The cohesive force attributed to the change in entropy from gas to liquid phase is small for a dense gas; there is only a small structural change between gas and liquid.
Repulsive forces are extremely strong and prevent any arrangement where atoms overlap.
Liquids arrange to minimize volume by maximizing the packing of these hard spheres.
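The hard-sphere idealization described above can be written as a pair potential in a couple of lines; the diameter used here is an illustrative argon-like value:

```python
# Hard-sphere pair potential: infinitely repulsive on overlap,
# zero (no attraction) otherwise.
def hard_sphere(r, sigma):
    """Pair potential energy: inf if spheres of diameter sigma overlap, else 0."""
    return float("inf") if r < sigma else 0.0

sigma = 3.4e-10   # argon-like sphere diameter in metres (illustrative)
overlapping = hard_sphere(3.0e-10, sigma)   # inside the diameter: forbidden
separated = hard_sphere(4.0e-10, sigma)     # outside the diameter: no interaction
```

This potential forbids any configuration where spheres overlap while contributing nothing to the energy otherwise, which is exactly the simplification the bullet points above justify.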

Sources
1. Griffiths, David J. Introduction to Quantum Mechanics. 2nd ed., Pearson Prentice Hall, 2004.
2. Stone, A. The Theory of Intermolecular Forces. 2nd ed., Oxford: Clarendon Press, 2013, p. 146.
3. Stone, A. The Theory of Intermolecular Forces. 2nd ed., Oxford: Clarendon Press, 2013, pp. 43–56.

See Also
Lennard-Jones Potential
Exchange_Interaction
Pauli_exclusion_principle
Multipole Expansion
Hydrogen Bonds
Hydrogen bonding is a type of polar interaction between a hydrogen atom that is bound to an atom with a large electronegativity and a hydrogen bond acceptor. Hydrogen bond acceptors fall into two main categories: atoms with a large negative charge, such as Cl⁻ or Br⁻, and compounds with a polar bond, such as a ketone or carboxylic acid. These bonds are a subclass of dipole-dipole interactions. These interactions are found in many different scenarios: they are present in some organic solvents, solid crystal structures, proteins, and DNA.

Hydrogen Bonding in Liquids


Hydrogen bonds are present in liquids like water, ethanol, hydrogen fluoride, and N-methylacetamide. These hydrogen bonds are partially responsible for their large enthalpies of vaporization, along with boiling points higher than those of hydrocarbons of comparable size. For example, the boiling point of hexane is 68 °C but the boiling point of hexanol is 135 °C. This is due to the hydrogen bonds formed between the hydroxyl hydrogen atom of one hexanol molecule and the oxygen atoms of other hexanol molecules.

Hydrogen Bonding in Solids


Hydrogen bonds are also present in solids. The bonding in solids is observed to allow for maximum packing of molecules, giving the solid the maximum number of hydrogen bonds and leading to a stronger crystal. This can be observed in solid water, and is responsible for some of water's unique properties. Each water molecule is capable of donating two hydrogen bonds and accepting two hydrogen bonds.

Strength of Interactions
The magnitudes of hydrogen bonds depend on a couple of different factors. The first is the difference in electronegativity between hydrogen and the atom it is attached to: the larger the difference in electronegativity, the stronger the interaction. The second is the radius of the acceptor; it must be of a size that allows the hydrogen to get close enough to interact. Generally, the Slater radius should be between 0.65 and 1.00 Å; anything larger will not allow the hydrogen to come close enough to the atom to interact attractively. The figure on the right illustrates the potential energies of the dimerization of HF and CH₃Cl. The two molecules have very similar dipole moments, but due to the hydrogen bonding between the HF molecules, a deeper potential energy minimum can be obtained, illustrated by the larger well depth. In this figure the well also appears at a shorter distance for HF, due to the previously mentioned dependence on the radius of the acceptor: F is smaller than Cl, allowing the hydrogen to get closer for stronger interactions.

The potential energy surfaces of the dimerization of hydrogen fluoride (black) and chloromethane (blue) calculated using MP2/aug-cc-pVTZ. Although the dipole moments of the two molecules are very similar, HF has a much more negative dimerization energy because the molecules can reach a shorter intermolecular distance before repulsive interactions become dominant.

The directionality of the hydrogen bond also affects the strength of the bond: the optimal angle of interaction is 180°, while the optimal distance for interaction is between 1.5 and 2.5 Å. Planar molecules can have very strong hydrogen-bonding capabilities. A flat molecule allows for constructive bond dipoles and avoids steric clashes between other atoms in the molecule.

Dependence on Distance
The strength of hydrogen bonds dissipates rather quickly with distance, following the same distance dependence as other dipole-dipole interactions, which is illustrated by the equation below:

V(r, θ₁, θ₂, Φ) = −(μ₁ μ₂ / (4πε₀ r³)) (2 cos(θ₁) cos(θ₂) − sin(θ₁) sin(θ₂) cos(Φ))

Dipole-Dipole Interactions

Crystal structure of the Ih phase of ice

where θ₁ and θ₂ are the two angles of orientation and Φ is the dihedral angle. This equation illustrates that there is an inverse relationship between the strength of the interaction and the cube of the distance between the dipoles.
Effect of Radius
Rotational Averaging
Rotational Averaging
Rotational averaging describes the contribution to the potential energy from the rotational orientation of a charge-dipole interaction. Boltzmann-weighted expectation values are used to give a single average value for the system's potential energy due to rotation.

For example, take a charged particle and a molecule with a permanent dipole. When they interact, the potential energy of this interaction can easily be calculated. For a dipole of length l, at a distance r between the dipole centre and the charged particle, the energy of interaction can be described by:

V(r, θ) = −q μ cos(θ) / (4πε₀ r²)

Potential Energy of a Charge-Dipole Interaction

where q is the charge of the particle, μ is the dipole moment, θ is the angle between the dipole vector and the line joining the particle and the dipole centre, ε₀ is the vacuum permittivity constant, and r is the distance between the particle and the dipole.

An ion interacts with a polar molecule. Vector geometry can be used to describe the energy of this interaction.

Geometrically, this interaction is dependent on the distance r and the length of the dipole, as well as the orientation angle. If the distance between the ion and the dipole is taken to be a fixed value, the angle θ still has the ability to change. This varied orientation of θ results in rotation of the dipole about its centre, relative to the interacting charged particle. The weights of the various orientations are described by a Boltzmann-weighted expectation value, described generally by:

⟨E⟩ = Σᵢ Eᵢ e^(−Eᵢ/kBT) / Σᵢ e^(−Eᵢ/kBT)

Expectation value (discrete states)

⟨E⟩ = ∫ E e^(−E/kBT) dτ / ∫ e^(−E/kBT) dτ

Expectation value (continuous states)

where ⟨E⟩ is the expectation value, Eᵢ is the energy of a particular configuration, kB is Boltzmann's constant, and T is the temperature. The discrete form is a sum over the energy levels of the system. The probability of a configuration is directly proportional to e^(−E/kBT), indicating that at a given temperature, lower-energy configurations are more probable. An equation can then be derived from this general expression, in order to relate it to the geometry and energy of a charge-dipole interaction.

Derivation of Rotationally-Averaged Charge-Dipole Interaction Potential


The orientationally averaged potential energy ⟨V⟩ is the expectation value of the charge-dipole potential energy averaged over the orientation angle θ of the dipole.

Starting with the potential energy of a charge-dipole interaction,

V(θ) = −qμ cos θ / (4πε₀r²)

We let

C = qμ / (4πε₀r²)

This makes

V(θ) = −C cos θ

The average over the dipole orientation using the expectation value in classical statistical mechanics is:

⟨V⟩ = ∫₀^π V(θ) exp(−V(θ)/k_BT) sin θ dθ / ∫₀^π exp(−V(θ)/k_BT) sin θ dθ

Note: when integrating over an angle, the variable of integration becomes sin θ dθ.

To solve this integral we must first use a first-order Taylor series approximation, because integrals of exponentials of cos θ do not have analytical solutions.

The first-order Taylor series approximation is as follows:

exp(x) ≈ 1 + x

Using the Taylor series with x = −V(θ)/k_BT = C cos θ/k_BT gives:

exp(−V(θ)/k_BT) ≈ 1 + C cos θ/k_BT

The integral now becomes:

⟨V⟩ = ∫₀^π (−C cos θ)(1 + C cos θ/k_BT) sin θ dθ / ∫₀^π (1 + C cos θ/k_BT) sin θ dθ

Multiplying into the brackets will give:

⟨V⟩ = ∫₀^π [−C cos θ sin θ − (C²/k_BT) cos²θ sin θ] dθ / ∫₀^π [sin θ + (C/k_BT) cos θ sin θ] dθ

All terms that do not depend on θ are constants and can be factored out of the integrals. The terms can be expressed as 4 integrals, which we solve using standard trigonometric integrals:

First: ∫₀^π cos θ sin θ dθ = 0

Second: ∫₀^π cos²θ sin θ dθ = 2/3

Third: ∫₀^π sin θ dθ = 2

Fourth: ∫₀^π cos θ sin θ dθ = 0

Plugging each solved integral back into the equation gives:

⟨V⟩ = [−C·(0) − (C²/k_BT)(2/3)] / [2 + (C/k_BT)(0)] = −C²/(3k_BT)

Finally, replacing C with qμ/(4πε₀r²) gives:

⟨V⟩ = −q²μ² / (3(4πε₀)² k_BT r⁴)

The Orientational Average of the Charge-Dipole Interaction Energy
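The first-order Taylor expansion used in the derivation can be checked numerically: the exact Boltzmann average over cos θ approaches the −C²/(3k_BT) result in the weak-interaction (high-temperature) limit. Here u = C/(k_BT) is a dimensionless coupling strength introduced only for this illustration:

```python
import numpy as np

def avg_energy_over_kT(u, n=200001):
    """Exact numerical Boltzmann average of V/kT = -u*cos(theta) over orientations."""
    x = np.linspace(-1.0, 1.0, n)        # substitution x = cos(theta)
    w = np.exp(u * x)                    # Boltzmann weight exp(-V/kT)
    return np.sum(-u * x * w) / np.sum(w)   # dx cancels in the ratio

u = 0.1                                  # weak coupling, where the Taylor expansion holds
numeric = avg_energy_over_kT(u)
approx = -u**2 / 3.0                     # the rotationally averaged result derived above
```

At stronger couplings (larger u) the two values diverge, which is why the averaged r⁻⁴ form is a high-temperature approximation.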
The Lennard-Jones Potential
The Lennard-Jones potential describes the interactions of two neutral particles using a relatively simple mathematical model. Two neutral molecules feel both attractive and repulsive forces based on their relative proximity and polarizability. The sum of these forces gives rise to the Lennard-Jones potential, as seen below:

V(r) = 4ε[(σ/r)¹² − (σ/r)⁶] [1]

where ε is the potential well depth, σ is the distance where the potential equals zero (roughly double the van der Waals radius of the atom), and R_min is the distance where the potential reaches a minimum, i.e., the equilibrium separation of the two particles.

[Figure: The Lennard-Jones 6-12 potential approximates the intermolecular interactions of two atoms due to Pauli repulsion and London dispersion attraction. The potential is defined in terms of the well depth (ε) and the intercept (σ). Other formulations use the radius where the minimum occurs, R_min, instead of σ.]

The relationship between σ and R_min is

R_min = 2^(1/6) σ

Pauli Repulsion

The first half of the Lennard-Jones potential is Pauli repulsion. This occurs when two closed-shell atoms or molecules come into close proximity and their electron density distributions overlap. This causes strong inter-electron repulsion and, at extremely short distances, inter-nuclear repulsion. This repulsion follows the exponential decay of the electron density:

V_repulsion(r) = A exp(−cr)

where A and c are constants and r is the intermolecular distance. In a liquid, however, it is very unlikely that two particles will be at highly repulsive distances, and thus a simplified expression can be used by assuming the potential has an r⁻¹² dependence (note that this high exponent means that the energy of repulsion drops off extremely fast as molecules separate). The resulting simple polynomial term is as follows:

V_repulsion(r) = C₁₂/r¹²

where the C₁₂ coefficient is defined as:

C₁₂ = 4εσ¹²

A more rigorous exponential treatment of the repulsive wall is given by the Buckingham potential.[2]

Further Reading
Buckingham potential
Exchange interaction

London Dispersion
The second half of the Lennard-Jones potential is known as London dispersion, or the induced dipole-induced dipole interaction. While a particular molecule may not normally have a dipole moment, at any one instant in time its electrons may be asymmetrically distributed, giving an instantaneous dipole. The strength of these instantaneous dipoles, and thus the strength of the attractive force, depends on the polarizability and ionization potential of the molecules. The ionization potential measures how strongly the outer electrons are held by the atoms. The more polarizable the molecule, the more its electron density can distort, creating larger instantaneous dipoles. Much like Pauli repulsion, this force depends on a coefficient, C₆, and decays as the molecules move further apart; in this case, the dependence is r⁻⁶:

V_dispersion(r) = −C₆/r⁶,   with   C₆ = (3/2)·[I_A I_B/(I_A + I_B)]·α′_A α′_B

where α′ is the polarizability, usually given as a volume, and I is the ionization energy, usually given in electron volts. Lastly, the C₆ constant can also be expressed by the variables seen in the Lennard-Jones equation:

C₆ = 4εσ⁶
Combination Rules
In the case of two different molecules interacting, a combination rule called the Lorentz-Berthelot combination rule can be imposed to create new σ and ε values. These values are the arithmetic and geometric means, respectively:

σ_AB = (σ_A + σ_B)/2,   ε_AB = √(ε_A ε_B)

For example, an Ar-Xe Lennard-Jones plot will have σ and ε values intermediate between those of Ar-Ar and Xe-Xe. An example of this combination rule can be seen in the figure to the right.

[Figure: The estimation of Lennard-Jones potential parameters for mixed pairs of atoms (i.e., A-B) using the Lorentz-Berthelot combination rule.]

Example

Calculate the intermolecular potential between two argon (Ar) atoms separated by a distance of 4.0 Å (use ε = 0.997 kJ/mol and σ = 3.40 Å).
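A short numerical solution to this example, using the values from the problem statement (distances in Å, energies in kJ/mol):

```python
def lj_potential(r, eps, sigma):
    """Lennard-Jones 12-6 potential: V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

v = lj_potential(4.0, 0.997, 3.40)
# v is about -0.94 kJ/mol: attractive, and close to the well depth because
# 4.0 Angstrom is just beyond R_min = 2**(1/6) * 3.40 = 3.82 Angstrom.
```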

References
1. Lennard-Jones, J. E. (1924), "On the Determination of Molecular Fields", Proc. R. Soc. Lond. A 106 (738): 463–477,
doi:10.1098/rspa.1924.0082, Bibcode: 1924RSPSA.106..463J.
2. R. A. Buckingham, The Classical Equation of State of Gaseous Helium, Neon and Argon, Proceedings of the Royal Society of
London. Series A, Mathematical and Physical Sciences 168 pp. 264-283 (1938).

See Also
Wikipedia:Lennard-Jones potential
Chemwiki: Lennard-Jones (http://chemwiki.ucdavis.edu/Physical_Chemistry/Physical_Properties_of_Matter/Int
ermolecular_Forces/Lennard-Jones_Potential)
Intramolecular Forces
Bond Stretches
Bond stretches are alterations of a bond length within a molecule. Using water as an example, there are two modes of stretching: symmetric, in which the two oxygen-hydrogen bonds lengthen and shorten at the same time, and asymmetric, in which one bond lengthens while the other shortens. The energy associated with this motion can be described using the following equation:

V_bond = k_bond (r − r_e)²

[Figure: The representation of the potential energy of a torsional angle rotation using a cosine representation.]

We see a Hooke's law relationship, where k_bond is a force constant for the bond and the x² term in the Hooke's law equation is replaced by the square of the displacement of the bond length r from its equilibrium value r_e. This relationship gives an accurate value for the energy of the bond stretch near the equilibrium bond length.

Bond Angle Bending


Bond-angle energies arise from distortion of the optimized bond angle to a less favorable conformation. This can be either a decrease in the angle, which brings the orbitals closer to one another and increases the potential energy, or an increase in the angle, which moves the orbitals away from the ideal overlap that gives the molecule its most stable orbital interactions. Either distortion increases the potential energy by moving the molecule out of its low-energy conformation. This energy relationship can be observed through the following equation:

V_angle = k_angle (θ − θ_e)²

We again see a Hooke's law relationship, where k_angle is a force constant for the bond angle and the x² term in the Hooke's law equation is replaced by the square of the displacement of the bond angle θ from its equilibrium value θ_e. This relationship gives an accurate value for the energy of the bond-angle bend near the equilibrium angle.

Torsional Rotations
Torsional rotations, which can also be described as dihedral interactions, come about in systems of 4 or more bonded atoms. This interaction involves rotation about the central bond into a favorable orientation with the most stable overlap conformation. Some of these known conformations are called synperiplanar, anticlinal, gauche, and antiperiplanar, listed in order of highest to lowest energy. By observing Newman projections along the central bond we can see where the unfavorable conformational interactions lie; in butane, for example, the methyl groups are considered large and have unfavorable overlap with one another. Mathematically, this can be represented using the following equation:

V_torsion = k_φ [1 + cos(nɸ − δ)]

[Figure: The torsional potential energy surface of butane.]

where k_φ is the rotational barrier of the system, n is the periodicity (the number of times the rotation repeats over 360 degrees), ɸ is the dihedral angle, and δ is the offset of the function.
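As an illustration, the cosine form above can be evaluated for an ethane-like threefold rotation (the barrier height and offset used here are hypothetical values):

```python
import math

def torsion_energy(phi_deg, k=12.0, n=3, delta=0.0):
    """V(phi) = k*(1 + cos(n*phi - delta)); k and delta are illustrative values."""
    phi = math.radians(phi_deg)
    return k * (1.0 + math.cos(n * phi - delta))

eclipsed = torsion_energy(0.0)    # maximum of the potential, equal to 2*k
staggered = torsion_energy(60.0)  # minimum of the potential, equal to 0
```

With n = 3 the potential repeats every 120°, matching the three equivalent staggered minima.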

Improper Torsion
An improper torsion is a special type of torsional potential that is designed to enforce the planarity of a set of atoms (ijkl), where i is the index of the
central atom. The improper torsion angle is defined as the angle between the planes defined by atoms ijk and jkl.

Internal Non-Bonded Interactions


The atoms of a molecule will also have intramolecular steric, dispersive, and electrostatic interactions with each other. These interactions within a molecule help determine its conformation: steric repulsion together with Coulombic and dispersive interactions shapes the conformation of the molecule. Every atom in the system can be assigned a partial charge (q). These interactions are calculated using standard Lennard-Jones and Coulombic pairwise interaction terms,

V = Σ_pairs { 4ε_ij[(σ_ij/r_ij)¹² − (σ_ij/r_ij)⁶] + q_i q_j/(4πε₀ r_ij) }

Here we see the combination of the Lennard-Jones equation and Coulomb's law. With this relationship we can calculate the energy resulting from the non-bonded intramolecular interactions. It should be noted that pairs of atoms that are directly bonded, or that form a bond angle, are excluded from this sum; those interactions are already included through the calculations in the previous sections.
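A minimal sketch of one pairwise term in this sum (units assumed for illustration: kJ/mol, Å, and elementary charges, with 1389.35 kJ·Å/mol as the standard e²/4πε₀ conversion factor):

```python
def nonbonded_pair_energy(r, eps, sigma, qi, qj):
    """Lennard-Jones plus Coulomb energy for a single non-bonded atom pair."""
    sr6 = (sigma / r) ** 6
    lj = 4.0 * eps * (sr6 * sr6 - sr6)      # Lennard-Jones 12-6 term
    coulomb = 1389.35 * qi * qj / r         # Coulomb term, e^2/(4*pi*eps0) in kJ*A/mol
    return lj + coulomb

# Oppositely charged pair evaluated at r = sigma, where the Lennard-Jones term is exactly zero,
# so the result is purely the (attractive) Coulomb energy.
e = nonbonded_pair_energy(3.0, 0.5, 3.0, 0.4, -0.4)
```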
Molecular Mechanical Force Fields
General Form of Potential Energy Function
The overall energy of a system is based on an enormous number of factors, and it is the sum of these individual attractive and repulsive forces
between all the atoms in the system that results in a final "force field". The potential energy for a set of atomic coordinates (r) is the sum of bonded
(i.e., bond, angle, and dihedral) and non-bonded (i.e., repulsion/dispersion and electrostatic) terms. This sum ultimately defines the molecular
mechanical force field.

Due to this complexity, calculating the potential energy for a set of molecules using a force field would require a complete set of known parameters,
which also require several different methods of determination. These methods are outlined in this section:

Interaction | Term type | Expression | Parameters
Bonds | Bonded | k_bond(r − r_e)² | spring constant (k_bond), equilibrium length (r_e)
Angles | Bonded | k_angle(θ − θ_e)² | spring constant (k_angle), equilibrium angle (θ_e)
Dihedrals | Bonded | k_φ[1 + cos(nɸ − δ)] | barrier height (k_φ), periodicity (n), offset (δ)
Repulsion/Dispersion | Non-bonded | 4ε[(σ/r)¹² − (σ/r)⁶] | atomic radii (σ), well depth (ε)
Electrostatics | Non-bonded | q_i q_j/(4πε₀r) | partial atomic charges (q)

Parameterization
Empirical Parameters

For small molecules, parameters may be defined to fit experimental properties. A liquid will have different bulk physical properties for each set of
non-bonded parameters, so candidate parameter sets can be tested until the model properly represents the properties of the real liquid. An example
would be determining a proper model for bulk water, assuming no Lennard-Jones interactions for hydrogen atoms: parameter combinations would
be tested together until the model properly represented the expected characteristics of bulk water.

Calculating Electrostatic Interactions


Starting with the non-bonded terms, we can look at electrostatics. Every intermolecular pair of atoms has an electrostatic interaction, and these can be summed in a form comparing all atoms in one molecule (A) to all atoms in another molecule (B). This quickly adds up to a massive sum even when considering systems of a few complex molecules:

V_elec = Σ_{i∈A} Σ_{j∈B} q_i q_j / (4πε₀ r_ij)
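A sketch of this double sum over the atoms of two molecules, with each atom given as a (q, x, y, z) tuple (units assumed: elementary charges and Å, with the usual 1389.35 kJ·Å/mol conversion factor for e²/4πε₀):

```python
import math

def intermolecular_coulomb(mol_a, mol_b):
    """Sum q_i*q_j/(4*pi*eps0*r_ij) over all atom pairs between molecules A and B."""
    energy = 0.0
    for qa, *ra in mol_a:
        for qb, *rb in mol_b:
            energy += 1389.35 * qa * qb / math.dist(ra, rb)
    return energy

# Two unit point charges 1 Angstrom apart recover the conversion constant itself.
e = intermolecular_coulomb([(1.0, 0.0, 0.0, 0.0)], [(1.0, 1.0, 0.0, 0.0)])
```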

Regarding partial charges: as there is no rigorous way to translate a charge distribution into point charges, certain constraints are imposed. The sum of the partial charges should equal the formal charge of the molecule (seen below), and equivalent atoms should carry the same charge. Charges can be chosen to match the gas-phase dipole moment, treated as free parameters and adjusted to reproduce target properties, or fit to quantum mechanical electrostatics.

Charge-fitting schemes fit partial charges to the quantum mechanical electrostatic potential (the potential energy a unit point charge has at position r in space). The potential can be calculated at a set of points around the molecule to generate an electrostatic potential surface (i.e., a map of the charge distribution around the molecule that locates the regions of positive and negative charge):

Target Electrostatic Potential Data from Quantum Chemical Calculations

The quantum mechanical electrostatic potential at a point r: the first term represents the electron density, the second term represents the nuclear charges at positions R_i:

φ_QM(r) = −∫ ρ(r′)/|r − r′| dr′ + Σᵢ Zᵢ/|r − Rᵢ|

Molecular Mechanical Partial Atomic Charges

The molecular mechanical model places partial atomic charges qᵢ at positions Rᵢ:

φ_MM(r) = Σᵢ qᵢ/|r − Rᵢ|

In terms of the electrostatic fit, the point charges (q) must be defined such that φ_MM(r) ≈ φ_QM(r) as closely as possible, thereby fitting both models.

Restraining terms can be introduced to give more practical models of molecules (known as Restrained Electrostatic Potential (RESP) fitting). Such
restraints limit charges to moderate and chemically relevant values. Restraints are also placed to ensure chemically equivalent groups retain the
same charges. These groups are given an atom type, distinguished by element, hybridization, and bonding partners, and parameters are defined
for each type. For example, all aromatic carbons in the same benzene ring should have the same parameters.

Generalized Amber Force Field (GAFF)

GAFF is a commonly used force field designed for reasonable parameters for simulations of small organic molecules. GAFF involves the use of the
RESP method for assigning partial charges, and atom type defines the Lennard-Jones parameters and internal bonding terms that will be associated
with the atom. Atom type definitions can be extremely specific (e.g., H on aliphatic C with 3 electron-withdrawing groups; inner sp2 C in
conjugated ring systems).

Lennard-Jones Potential
A combination of the Pauli repulsion and London dispersion attraction terms, the Lennard-Jones potential is defined using two parameters: the van der Waals radius (σ) and the well depth (ε). These parameters are responsible for the packing of molecules in liquids, and must compensate for errors in all other properties. The van der Waals radius essentially sets the limit at which molecules may approach one another (the potential energy is greater than zero when the separation is closer than σ). The well depth sets the minimum of the potential energy, which occurs when the distance between the molecules is R_min (i.e., the radius corresponding to the minimum potential energy). The first term represents Pauli repulsion, the second term London dispersion attraction:

V(r) = 4ε[(σ/r)¹² − (σ/r)⁶]

Bonding Interactions
Bond Stretch
Atoms within a molecule may vary their separation (stretch or compress), and this relationship can be described as a spring. This is why Hooke's law can be applied to bond lengths to define the energy required to stretch or compress, according to a spring constant (k_bond) and equilibrium length (r_e). The spring constant may be determined via the infrared vibrational frequency of the bond stretch (according to ν = (1/2π)√(k/μ), where μ is the reduced mass), while equilibrium bond lengths can be found through the use of rotational spectroscopy and crystallography.
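As a sketch, a spring constant can be back-calculated from a measured stretching wavenumber via k = μ(2πν)²; the H-³⁵Cl numbers below are approximate literature values used for illustration:

```python
import math

def spring_constant(wavenumber_cm, mu_amu):
    """k = mu*(2*pi*nu)**2, from the harmonic relation nu = (1/(2*pi))*sqrt(k/mu)."""
    c = 2.99792458e10            # speed of light in cm/s
    amu = 1.66053907e-27         # atomic mass unit in kg
    nu = wavenumber_cm * c       # convert wavenumber (cm^-1) to frequency (Hz)
    return mu_amu * amu * (2.0 * math.pi * nu) ** 2   # spring constant in N/m

# H-35Cl: stretch near 2886 cm^-1 and reduced mass ~0.9722 amu give k of roughly 480 N/m.
k = spring_constant(2886.0, 0.9722)
```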

Angle Stretch
The potential energy of bending a bond angle can also be approximated by Hooke's law, applied to define the energy required to bend the angle according to a spring constant (k_angle) and equilibrium angle (θ_e). Again, the spring constant may be determined via the infrared vibrational frequency of the bending mode, following the same formula, while the equilibrium angle can be found through the use of rotational spectroscopy and crystallography.

Dihedral Parameters
Also known as torsional rotations, dihedral relationships are produced by 4 bonded atoms. The potential depends on how the central bond is rotated and on the overlap of the two outer atoms (designated 1,4 interactions, with atoms 2 and 3 forming the central bond). These overlaps may result in a higher or lower potential energy depending on the conformation (e.g., gauche, trans, periplanar), and the relationship depends on the barrier height (k_φ) (the energy required to rotate about the central axis), the periodicity (n), and the offset (δ). These parameters can be determined using quantum chemistry or experimental isomer populations.
Calculation of molecular forces using quantum chemistry
Intermolecular Interactions
Intermolecular interactions can be calculated using quantum chemical programs. The interaction energy is the difference between the energy of the complex and the energies of its isolated fragments,

ΔE_int = E_complex − Σ E_fragments

The fragments are the atoms or molecules that comprise the complex.

Basis Set Superposition Error

Many quantum chemical programs use atom centered basis functions to represent the atomic orbitals. For a given calculation, the number of basis
functions used is fixed and finite. Inevitably, the description of the orbitals will be incomplete, in that a more accurate answer could be obtained if
more basis functions were added.

When calculating intermolecular interactions, the calculation of the complex will include the basis functions from all the fragments. At close
intermolecular contacts, the tails of these orbitals will extend into the space of atoms in the other fragments. In this way, the fragments will be
stabilized by the presence of these additional basis functions, so the energy of the complex includes both the stabilization due to the intermolecular
interaction and this "basis set superposition error."

Counterpoise Corrections

Intramolecular Interactions

Dihedral Potentials
Statistical properties
Statistical thermodynamics describes physical systems in terms of probability distributions.

Probability Distributions
A probability distribution is a function that gives the likelihood of an outcome. In statistical mechanics, this generally means the probability of a
system being in a particular state.

Gaussian Distribution

A Gaussian distribution, also called a normal distribution, is a function with a bell-shaped curve. Any random variable with the normal probability density

f(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²))

is called a normal variable with mean μ and variance σ² [1].

The Boltzmann Distribution (https://en.wikipedia.org/wiki/Boltzmann_distribution) is one such probability distribution; it gives the probability of a system occupying a state as a function of that state's energy and the temperature of the system,

pᵢ ∝ exp(−Eᵢ/k_BT) [2]

As shown below, physical properties of a system can be calculated using this Boltzmann distribution.

Conformational Distribution

Within solids, liquids, and gases, atoms can take on different orientations and arrangements. Conformational changes within a molecule can occur by
rotations around bonds. For example, gauche or eclipsed conformations, trans or cis, and any conformation in between would all fall under the
conformational distribution.

Calculating Physical Properties

Macroscopic properties are the average arrangement a system takes on over time. Assuming all possible energy levels are continuous, we can
employ classical mechanics to calculate physical properties. This assumption is only valid when the particles are heavy and the forces are relatively soft.

Averages
The ensemble average in statistical mechanics refers to the mean over all possible states of a system, weighted by its probability distribution. It is
dependent on the ensemble chosen (i.e., canonical, microcanonical, etc.).

The expectation value ⟨A⟩ gives the average value obtained when a physical property of a system is measured. Depending on the type of distribution, it
may not be the most probable value, but it is the probability-weighted value we expect to measure. In a classical system, the expectation value is
an integral over all of the possible configurations, integrated over the interval (−∞, ∞):

⟨A⟩ = ∫ A(r) exp(−V(r)/k_BT) dr / ∫ exp(−V(r)/k_BT) dr

where A represents the property being calculated, V represents the potential energy, k_B represents the Boltzmann constant, and T
represents the temperature of the system.

In classical statistical mechanics, the classical ensemble average is the normalized Boltzmann-weighted integral over all of phase space,

⟨A⟩ = ∫∫ A(r, p) exp(−H(r, p)/k_BT) dr dp / ∫∫ exp(−H(r, p)/k_BT) dr dp

where H is the Hamiltonian (https://en.wikipedia.org/wiki/Hamiltonian_(quantum_mechanics)) for the described system. This expression can be
used to find many physical properties, such as the average energy of a single-particle system or of many-body systems, by integrating over the spatial
coordinates and over the Maxwell distribution of momenta.

In a quantum mechanical system, the expectation value is the Boltzmann-weighted sum over energy levels:

⟨A⟩ = Σᵢ Aᵢ exp(−Eᵢ/k_BT) / Σᵢ exp(−Eᵢ/k_BT)

Example: Conformationally Averaged Dipole Moment

Conformational averaging using the Boltzmann distribution gives a way of finding the average dipole moment, as described in the previous section.

The dipole moment of a molecule changes as its conformation changes, as it is the vector sum of all the bond moments, and therefore the observed (or
'expected') value is a weighted combination of the conformers' moments. For example, in 1,2-dichloroethane the anti (trans) conformation is non-polar but the
gauche conformation is polar. Since the molecular dipole is a vector quantity, the conformationally-averaged dipole moment is the square root of the
Boltzmann-weighted average of the squares of the individual dipole moments,

⟨μ⟩ = √( Σᵢ pᵢ μᵢ² )

Using this formula, the conformationally averaged dipole moment can be calculated for a molecule like 1,2-dichloroethane, where the anti
conformation has no dipole and the gauche conformation does.
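A small numerical sketch of this average; the conformer populations and the gauche dipole value below are illustrative, not measured values:

```python
import math

def averaged_dipole(populations, dipoles):
    """Square root of the population-weighted mean of mu**2 (Debye)."""
    return math.sqrt(sum(p * mu * mu for p, mu in zip(populations, dipoles)))

# Illustrative anti (0 D) and gauche (2.6 D) conformers with assumed 70/30 populations.
mu_avg = averaged_dipole([0.7, 0.3], [0.0, 2.6])
# The average is nonzero even though the dominant conformer has no dipole.
```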

Variance
Variance is a value that indicates how widely spread a set of data is compared to the mean. The variance is the expectation of the squared
deviation of a random variable from its mean; the standard deviation is the square root of the variance. If the standard deviation or variance is low, the data lie close to the expected value. For a random variable X,

Var(X) = ⟨(X − μ)²⟩ = ⟨X²⟩ − ⟨X⟩²

where ⟨ ⟩ represents the expectation and μ represents the mean. Variance can be represented by Var(X) or σ².
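For a discrete distribution this is simply ⟨X²⟩ − ⟨X⟩², which can be computed directly:

```python
def variance(values, probabilities):
    """Var(X) = <X^2> - <X>^2 for a discrete probability distribution."""
    mean = sum(p * x for p, x in zip(probabilities, values))
    mean_sq = sum(p * x * x for p, x in zip(probabilities, values))
    return mean_sq - mean * mean

var = variance([1.0, 3.0], [0.5, 0.5])   # mean is 2.0, so the variance is 1.0
```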

References
Thermodynamic ensembles
The entire universe is governed by the laws of thermodynamics through the transfer of energy between matter. This process is too complex to
consider directly; instead, we consider several simplifications where part of the universe (i.e., the system) is considered separately from the rest of its
surroundings. The system houses the process or thermodynamic state under consideration, whereas the surroundings contain everything else.

The system can be described using an ensemble, also known as a statistical ensemble, which is an idealization consisting of a large number of copies
of a system, considered all at once, each of which represents a possible state that the real system might be in. A thermodynamic ensemble is a
statistical ensemble that is in statistical equilibrium. It provides a way to derive the properties of real thermodynamic systems from the laws of
classical and quantum mechanics.

We can consider different systems with different degrees of separation from their surroundings, from completely isolated (e.g., the microcanonical
ensemble) to completely open to the surroundings (e.g., the grand canonical ensemble). However, for the purposes of molecular simulations only three
main ensembles will be considered: the microcanonical, canonical, and isothermal-isobaric ensembles.

Microcanonical ensemble
The microcanonical ensemble is a system that is completely isolated from its surroundings. There is no transfer of energy or matter between the
system and the surroundings, and the volume of the system remains fixed. In other words, this is a physical system where the total number of
particles (N), the total volume (V), and the total energy (E) are constant. The microcanonical ensemble is typically abbreviated NVE.

The total energy of the system is constant, but there is no defined temperature of the microcanonical
ensemble. Temperature can only be defined for systems interacting with their surroundings. Isolated
systems, such as microcanonical, are only described in terms of their composition, volume, and total energy.
In statistical mechanics, a microcanonical ensemble is used to represent the possible states of a system
which have a specified total energy.

Canonical ensemble
In the canonical ensemble, the volume of the system is fixed, and energy can transfer across the boundary
between system and surroundings, but matter cannot. We can describe systems as being immersed in a heat
bath at a temperature (T), where the heat bath is much larger than the system. No amount of heat put off by
the system will significantly increase the temperature of the surroundings. The canonical ensemble applies
to systems of any size; while it is necessary to assume that the relative size of the heat bath is very large, the
system itself may be small or large. Now because the system and surroundings are in thermal contact, the
system will transfer heat (q) to and from the surroundings until they are in thermal equilibrium. Therefore,
unlike the microcanonical ensemble, the temperature of the canonical ensemble can be a defined constant
(T). This ensemble is typically abbreviated in terms of its constant number of particles, total volume, and temperature (NVT).

[Figure: Constant number of particles (N), volume (V), and total energy (E).]

The canonical ensemble is a statistical mechanics ensemble that represents the possible states of a mechanical system in thermal equilibrium with a heat bath at some fixed temperature. The system can exchange energy with the heat bath, so that the states of the system will differ in total energy. In the microcanonical ensemble the total energy is fixed, but in the canonical ensemble the energy is no longer constant; it can take on a range of values depending on the temperature. The canonical ensemble is important when attempting to describe the Helmholtz free energy of a system, which is the maximum amount of work a system can do at a constant volume (V) and temperature (T).

Isothermal-Isobaric ensemble
In the isothermal-isobaric ensemble, energy can transfer across the boundary, but matter cannot. The
volume of the system can change such that the internal pressure of the system matches the pressure exerted
on the system by its surroundings. Similar to the canonical ensemble, we can describe the isothermal-
isobaric ensemble as a system immersed in a heat bath at a temperature (T), where the heat bath is much
larger than the system. No amount of heat put off by the system will significantly increase the temperature
of the surroundings.

The isothermal-isobaric ensemble is a statistical mechanical ensemble that maintains a constant total number of particles, and constant temperature (T) and pressure (p), typically abbreviated NpT. This ensemble plays an important role in chemistry, since the majority of important chemical reactions are carried out under constant-pressure conditions. The isothermal-isobaric ensemble is also important when attempting to describe the Gibbs free energy of a system, which is the maximum amount of work a system can do at constant pressure (p) and temperature (T).

[Figure: Constant number of particles (N), volume (V), and temperature (T).]
[Figure: Constant number of particles (N), pressure (p), and temperature (T).]

Phase space
The Hamiltonian
The Hamiltonian, H, is a function that gives the sum of the potential and kinetic energy of a system. For the systems considered here, the
Hamiltonian is a function of the positions (r) and momenta (p) of the particles in the system. The potential energy, V(r), depends only on the positions
of the atoms, and the kinetic energy, K(p), depends only on the momenta of the particles [1]. Therefore, the Hamiltonian can be separated
into a potential energy term that depends only on the positions of the particles and a kinetic energy term that depends only on the momenta
of the particles,

H(r, p) = V(r) + K(p),   with   K(p) = Σᵢ |pᵢ|²/(2mᵢ)

The microcanonical ensemble is the statistical ensemble used to represent an isolated system, where the number of particles (N), the volume (V),
and the total energy (E) are constant.

Phase Space
The unique positions and momenta of each particle, for a system of N particles, can be accurately described by a 6N-dimensional space called
phase space. Each particle in a system has three unique position variables, each ranging from 0 to L, and three unique momentum variables, each
ranging from −∞ to ∞ [1]. These 6N numbers constitute the microscopic state of the system at time t [2,3]. A system of N particles
corresponds to a phase space with 6N variables, which describes every conceivable position and momentum combination of all particles in a
mechanical system [1,2,3].

Phase Point

A phase point refers to a single point in the phase space of an N-body system at a time t [2]. The dynamics of the system are fully described by the motion and
trajectory of the phase point as it travels through phase space [2]. Every classical state can be considered equally probable when N, V, and E are
held constant [3].

Example

Consider a system containing two particles of argon (labelled 1 and 2, respectively), where N = 2. Since phase space is described by 6N
dimensions, the total number of phase-space variables becomes 6(2) = 12.

r₁, position of particle 1: (x₁, y₁, z₁)
r₂, position of particle 2: (x₂, y₂, z₂)
p₁, momentum of particle 1: (p_x1, p_y1, p_z1)
p₂, momentum of particle 2: (p_x2, p_y2, p_z2)

Argon is a noble gas and nearly inert to intermolecular interactions with other particles in a system. Accordingly, the two argon atoms in this hypothetical
system are treated as non-interacting. When atoms are assumed to be non-interacting, potential energy interactions between particles are no longer accounted
for and V(r) = 0. The Hamiltonian becomes the sum of the kinetic energies of the two particles:

H = |p₁|²/(2m) + |p₂|²/(2m)

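This two-particle ideal Hamiltonian is simple to evaluate; the momenta below are arbitrary illustrative values (amu and amu·Å/ps units assumed):

```python
def ideal_gas_hamiltonian(momenta, mass):
    """H = sum over particles of |p|**2/(2m); the potential term V(r) is zero."""
    return sum((px * px + py * py + pz * pz) / (2.0 * mass) for px, py, pz in momenta)

# Two non-interacting argon atoms (mass 39.948 amu) with arbitrary momentum vectors.
h = ideal_gas_hamiltonian([(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)], 39.948)
```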
The Classical Partition Function


In quantum mechanical systems, the system must occupy a set of discrete (quantized) states. These states correspond to the energy levels of the system, labelled i = 0, 1, 2, …. A partition function can be defined as a sum over these states,

Q = Σᵢ exp(−Eᵢ/k_BT)

where Eᵢ is the energy of state i, k_B is the Boltzmann constant, and T is the temperature.

[Figure: Two argon atoms inside phase space. The blue and green arrow vectors correspond to the atoms' positions and momenta, respectively. The atoms are non-interacting; however, they do change trajectory as they bounce off the walls of the box.]

Explicit sums are practical when the energy states of molecules are independent and there are no intermolecular interactions. In systems where intermolecular interactions are significant, it is generally more practical to approximate the system in terms of continuous variables. The states are approximated to be continuous and can be integrated with respect to the phase space of the system, thereby sampling every state of the system. This leads to the classical partition function:

Q = (1/(N! h^{3N})) ∫∫ exp(−H(r, p)/k_BT) dr dp

Many Body Systems

The expectation values of a classical system can be evaluated by integrating over phase space. The classical partition function is an integral over the 6N
phase-space variables:

Q = (1/(N! h^{3N})) ∫ ⋯ ∫ exp(−H(r₁, …, r_N, p₁, …, p_N)/k_BT) dr₁ ⋯ dr_N dp₁ ⋯ dp_N
References
1. McQuarrie, Donald A. 1973. Statistical Thermodynamics. New York, N.Y. Harper & Row, Publishers, Inc. Pages 117-118.
2. Tuckerman, Mark E. 2010. Statistical Mechanics: Theory and Molecular Simulation. Oxford England. New York: Oxford
University Press. Pages 2-5.
Dilute gases
Ideal Gas Law
Gases are often described using the ideal gas law, which relates the pressure of a gas to its density with a simple expression,

p = ρ k_B T

where ρ = N/V is the number density of the gas, p is the pressure of the gas, k_B is the Boltzmann constant, equal to 1.381 × 10⁻²³ J/K, and T is the temperature in Kelvin. The expression is derived by approximating a gas as point masses that have only kinetic energy and experience perfectly elastic collisions. Unfortunately, this theory breaks down at densities where intermolecular forces become significant, that is, when the potential energy is non-zero. The ideal gas law is suitable for only very dilute gases.

[Figure: A molecular dynamics simulation of high-density argon gas at 300 K. Argon-argon interactions are described using a Lennard-Jones potential. The gas density is 5282 mol per cubic meter. The pressure is approximately 13200 kPa.]

Virial Theorem

To more accurately describe the properties of dilute gases, the Virial equation of state is used. The Virial equation accounts for the effects of intermolecular forces through an expansion into higher-order powers of the density. Mathematically, this is described using an infinite power series, where B₂(T) and B₃(T) are the second and third Virial coefficients:

p/(k_B T) = ρ + B₂(T) ρ² + B₃(T) ρ³ + …

At low densities the deviations from ideal gas behavior can be sufficiently described by the second Virial coefficient, B₂(T).

B₂(T) = −Z₂/(2V)

where V is the volume of the gas and Z₂ is the configurational integral. The configurational integral for the second Virial coefficient is the
contribution of every possible pair of positions weighted by its Boltzmann distribution. For the third Virial coefficient the configurational integral
would be the contribution of three interacting particles. By extension, the configurational integral for any nth Virial coefficient would be the
contribution of n interacting particles. For this reason higher Virial coefficients are quite complicated to derive; fortunately, they are only necessary
for describing gases at pressures above 10 atm [3].
The configurational integral for two interacting particles is as follows:

Z₂ = ∫∫ [exp(−u(r₁, r₂)/k_BT) − 1] dr₁ dr₂

where u is the potential energy of the interaction of a single pair of particles, and r₁ and r₂ are the positions of particles 1 and 2. The second Virial
coefficient can be written in terms of the pairwise intermolecular interaction potential, u(r), if the position of particle 2 is defined relative to the position
of particle 1. The distance, r, would then be the distance between the two interacting particles. The equation derived from this modification is as
follows:

B₂(T) = −2π ∫₀^∞ [exp(−u(r)/k_BT) − 1] r² dr

A different B2(T) is derived for the hard sphere potential and for the Lennard-Jones potential.

Hard Sphere Potential


The hard sphere model approximates particles as hard spheres that cannot overlap: if the spheres are not overlapping, the potential energy is zero, and if they are overlapping, the potential energy is infinitely high. This approximation represents the very strong, short-range Pauli repulsion forces. The equation for the potential energy is as follows:

u(r) = ∞ for r < σ; u(r) = 0 for r ≥ σ

where u(r) is the potential energy and σ is the hard-sphere diameter (the distance of closest approach). Integrating the configurational integral for the hard sphere potential gives B2 = (2/3)πσ³ as the second virial coefficient. This model is crude and only accounts for repulsive forces; a slightly more accurate model is the Lennard-Jones potential model.
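This hard-sphere result can be verified numerically. The sketch below (plain Python in reduced units, with an illustrative diameter σ = 1) evaluates the Mayer-function integral for B2 on a grid and compares it to the analytic value 2πσ³/3:

```python
import math

def b2_hard_sphere(sigma=1.0, r_max=5.0, n=100000):
    """Numerically evaluate B2 = -2*pi * int (exp(-u/kT) - 1) r^2 dr
    for the hard-sphere potential. The Mayer function exp(-u/kT) - 1
    is -1 for r < sigma and 0 otherwise, so temperature drops out."""
    dr = r_max / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr              # midpoint rule
        mayer = -1.0 if r < sigma else 0.0
        total += mayer * r * r * dr
    return -2.0 * math.pi * total

sigma = 1.0
b2 = b2_hard_sphere(sigma)
exact = 2.0 * math.pi * sigma**3 / 3.0
print(b2, exact)   # the numerical and analytic values agree closely
```

Because the hard-sphere Mayer function is temperature independent, the hard-sphere B2 does not depend on temperature, unlike the Lennard-Jones case below.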

Lennard-Jones Potential
The Lennard-Jones potential is a combination of a polynomial repulsion term, proportional to r⁻¹², and a London dispersion attractive term, proportional to r⁻⁶,

u(r) = A/r¹² − B/r⁶

The A and B terms can be expanded, and the interaction energy, u(r), can be expressed as:

u(r) = 4ε [(σ/r)¹² − (σ/r)⁶]

where ε is the potential well depth, σ is the distance at which the potential crosses zero, and r is the distance between the particles. The second virial coefficient derived from the Lennard-Jones potential has no analytical solution and must be solved numerically.

The Lennard-Jones model is more accurate than the hard sphere model, as it accounts for attractive interactions and its repulsive term is more realistic than hard-sphere repulsion. That being said, it is still limited by the fact that only London dispersion attractive interactions are considered, making the model applicable only to noble gases.
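The numerical solution for the Lennard-Jones B2(T) can be sketched with simple quadrature in reduced units (σ = ε = 1, T* = k_B T/ε); the grid spacing and integration bounds below are illustrative choices:

```python
import math

def b2_lennard_jones(t_star, r_min=0.3, r_max=20.0, n=20000):
    """Second virial coefficient of the Lennard-Jones fluid in reduced
    units: B2* = -2*pi * int (exp(-u(r)/T*) - 1) r^2 dr.
    For r < r_min the potential is so repulsive that the Mayer function
    is -1 to numerical precision, so that part is done analytically."""
    dr = (r_max - r_min) / n
    total = -r_min**3 / 3.0              # int of (-1)*r^2 from 0 to r_min
    for i in range(n):
        r = r_min + (i + 0.5) * dr
        u = 4.0 * (r**-12 - r**-6)       # Lennard-Jones potential
        total += (math.exp(-u / t_star) - 1.0) * r * r * dr
    return -2.0 * math.pi * total

print(b2_lennard_jones(1.0))   # negative: attraction dominates
print(b2_lennard_jones(5.0))   # positive: repulsion dominates
```

B2 is negative at low temperature (attraction dominates) and positive at high temperature (repulsion dominates); it crosses zero at the Boyle temperature, near T* ≈ 3.4 for the Lennard-Jones fluid.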

References
Potential of mean force
The potential of mean force is one of the three major forces considered in stochastic dynamics models. The potential of mean force provides a free energy profile along a chosen coordinate, be it a geometric or energetic coordinate, such as the distance between two atoms or the torsional angle of a molecule. This free energy profile describes the average force over all possible configurations of a given system (the ensemble average of the force) on the particles of interest. The potential of mean force can be determined from both Monte Carlo and molecular dynamics simulations.

Relation to Radial Distribution Function


For a liquid, the potential of mean force, w(r), is related to the radial distribution function, g(r), by

w(r) = −k_B T ln g(r)
The potential of mean force can be calculated directly by using histogram analysis of a trajectory from an MD or MC simulation. This analysis
calculates the probability of each possible configuration, and determines the free energy change associated with that state. In systems where parts of
the coordinate of interest will not be sufficiently sampled from a naive simulation, umbrella sampling, free energy perturbation, or other enhanced
sampling techniques can be used.
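As a sketch of this analysis, assuming a g(r) histogram has already been extracted from a trajectory, the relation w(r) = −k_B T ln g(r) converts the radial distribution function into a potential of mean force (the unit choice and the g(r) values below are illustrative):

```python
import math

KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K), a common MD unit choice

def pmf_from_gr(g_of_r, temperature):
    """Convert a radial distribution function, given as (r, g(r)) pairs,
    into a potential of mean force via w(r) = -kB*T*ln g(r).
    Bins with g(r) = 0 were never sampled, so the PMF there is
    undefined (returned as None)."""
    pmf = []
    for r, g in g_of_r:
        w = -KB * temperature * math.log(g) if g > 0 else None
        pmf.append((r, w))
    return pmf

# Illustrative (made-up) g(r) values with a first solvation peak at r = 3.5 A
gr = [(3.0, 0.2), (3.5, 2.5), (4.0, 1.2), (5.0, 1.0)]
for r, w in pmf_from_gr(gr, 300.0):
    print(r, w)
# g(r) > 1 gives w(r) < 0 (favourable separation); g(r) = 1 gives w(r) = 0
```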

Derivation

By integrating the mean force between two distance values, the reversible work associated with moving the two particles between those separations can be found. This work is equal to the change in Helmholtz energy between the two states.

Sources
1. Chandler, D. Introduction To Modern Statistical Mechanics, Oxford University Press, 1987, QC174.8.C47

2. Tuckerman, M. Statistical Mechanics: Theory and Molecular Simulation, Oxford University Press, 2010

See Also
Wikipedia:Potential of mean force
Langevin dynamics
Langevin dynamics is used to describe the acceleration of a particle in a liquid,

dv(t)/dt = −γ v(t) + R(t)/m    (1)[4]

The −γv(t) term corresponds to the drag force divided by the mass, m, of the particle. The drag force of the system arises from weak, long-ranged intermolecular forces between atoms or molecules, and is created by the liquid surrounding the particle. The drag force is based on the frictional constant of the system, ζ, and the velocity of the particle, v: F_drag = −ζv, with γ = ζ/m. The frictional constant is proportional to the viscosity of the surrounding liquid and the radius of the particle, as found from Stokes' law.[4]

The R(t)/m term represents the random force divided by the mass. The random force of the system is based on short-ranged, strong Pauli repulsion between atoms or molecules; it is experienced as a result of the collisions of a molecule with the surrounding liquid. This force changes rapidly with time, much faster than the particle's velocity changes in dense liquids.[4]

Velocity of the Particle


(1) is a first-order linear differential equation and can be solved formally to give (2),

v(t) = v(0) e^(−γt) + (1/m) ∫₀ᵗ e^(−γ(t−τ)) R(τ) dτ    (2)[4]

The ensemble average of the velocity is

⟨v(t)⟩ = v(0) e^(−γt)

This is because the acceleration of the particle due to the random force averages to zero (i.e., ⟨R(t)⟩ = 0).

At short time intervals (t ≪ 1/γ), the average velocity becomes ⟨v(t)⟩ ≈ v(0).

At long time intervals (t ≫ 1/γ), the average velocity becomes ⟨v(t)⟩ → 0.

Displacement of the Particle


The Langevin equation expresses the rate of change of a particle's velocity. This in turn can be used to calculate diffusion, as diffusion depends on the velocity of a particle in the liquid. The Langevin equation can also be used to sample the canonical ensemble of states. By integrating the velocity from time 0 to time t, the displacement of the particle can be found:[4]

r(t) − r(0) = ∫₀ᵗ v(t′) dt′    (3)

Substituting the definition of velocity from (2) into (3) and integrating by parts gives (4). Squaring both sides of (4) and taking the ensemble average gives

⟨|r(t) − r(0)|²⟩ = (6 k_B T/(m γ²)) (γt − 1 + e^(−γt))    (5)[4]

The derivation is as follows. The autocorrelation of the random force, ⟨R(t₁)·R(t₂)⟩, is a rapidly varying function of the time difference τ = t₂ − t₁ only, and it is non-zero only when τ is small, so it can be re-written as a delta function in τ.[4] Substituting this form and carrying out the time integrals simplifies the expression for the mean squared displacement. Equipartition applies at equilibrium, so ⟨v(0)²⟩ = 3k_B T/m, and the above equation becomes (5).[4]

At short time intervals (t ≪ 1/γ), expanding e^(−γt) ≈ 1 − γt + (γt)²/2 makes the part of the equation representing Brownian motion go to zero. Letting ⟨v(0)²⟩ = 3k_B T/m, the mean squared displacement becomes

⟨|r(t) − r(0)|²⟩ ≈ (3 k_B T/m) t²

This corresponds to ballistic motion of the particle.[4] At short time scales, friction and collisions have not yet affected the movement of the particle.

At long time intervals (t ≫ 1/γ), the average velocity of the particle goes to zero. At large values of t, e^(−γt) → 0, and so

⟨|r(t) − r(0)|²⟩ ≈ (6 k_B T/(m γ)) t = 6 D t

The diffusion constant, D, is equal to k_B T/(mγ).[4]

This corresponds to the particle undergoing a random walk.[4]
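The two limits of the mean squared displacement, taking the standard result ⟨r²(t)⟩ = (6k_B T/(mγ²))(γt − 1 + e^(−γt)), can be checked numerically; the sketch below (arbitrary reduced units) compares the full expression against its ballistic and diffusive limits:

```python
import math

def msd(t, kT=1.0, m=1.0, gamma=1.0):
    """Mean squared displacement from the Langevin equation:
    <r^2(t)> = (6*kT/(m*gamma**2)) * (gamma*t - 1 + exp(-gamma*t))."""
    return 6.0 * kT / (m * gamma**2) * (gamma * t - 1.0 + math.exp(-gamma * t))

kT, m, gamma = 1.0, 1.0, 1.0
D = kT / (m * gamma)        # Einstein relation: D = kT/(m*gamma)

t_short = 1e-4              # t << 1/gamma: ballistic, <r^2> ~ (3kT/m) t^2
print(msd(t_short), 3.0 * kT / m * t_short**2)

t_long = 1e4                # t >> 1/gamma: diffusive, <r^2> ~ 6*D*t
print(msd(t_long), 6.0 * D * t_long)
```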

References
1. Rozanov, Y. A. (2015). Probability Theory: A Concise Course. Dover Publications, USA.
2. McQuarrie, D. A. (2000). Statistical Mechanics. University Science Books, California.
3. McQuarrie, D. A. (1997). Statistical Thermodynamics. University Science Books: Mill Valley, CA.
4. McQuarrie, D. A. (2000). Statistical Mechanics. Harper and Row. p. 452. ISBN 0-06-044366-9.
Periodic Boundary Conditions
Macroscopic systems are extremely large and are, therefore, expensive to compute by molecular simulations. For instance, one gram of water has about 3 × 10²² molecules, a number too large to be calculated explicitly, even on computers. Fortunately, periodic boundary conditions enable us to mimic an infinite system by treating a relatively small part of a system to achieve a reasonable representation of the infinite system. The particles of this small subsystem are contained within a unit cell (e.g., a three-dimensional box) that defines the boundary conditions. During the simulation, particles are free to move in the central (original) cell, and their periodic images in the adjacent cells move in an identical way. This means any particle that crosses one boundary of the cell will reappear on the opposite side.

Non-Bonded Interactions Under Periodic Boundary Conditions


[Figure: A schematic representation of periodic boundary conditions. The unit cell of the system is in the center cell (pink). The potential energy of the system includes interactions between this center cell and its periodic images.]

A particle in a periodic cell has non-bonded interactions with the other particles both in the original cell and in the surrounding cells (the periodic images of the original cell). The non-bonded potential energy (V_non-bonded) can be written as a sum over all particle pairs and all periodic images,

V_non-bonded = Σ_n Σ_i Σ_(j>i) [ 4ε_ij ((σ_ij/r_ij,n)¹² − (σ_ij/r_ij,n)⁶) + q_i q_j/(4πε₀ r_ij,n) ]

where r_ij,n = |r_j − r_i + n|, and n is a translational vector to the different cell images. For a periodic cubic cell of length L, n becomes

n = L (n_x, n_y, n_z)

where n_x, n_y, and n_z are integers.


The number of possible translational vectors, n, is infinite, which leads to an infinite number of non-bonded interactions. Thus, we need to make some approximations to deal with this problem.

[Figure: A schematic representation of the non-bonded interactions treated by the minimum image convention.]

The first term of the V_non-bonded is the Lennard-Jones potential. This potential has an attractive component (∝ r⁻⁶) for London dispersion, which is a short-range interaction. Accordingly, if the distance between interacting particles increases (i.e., r > 10 Å), the interaction becomes very weak, so the interactions can be truncated for distances greater than 10 Å. However, the potential loses its continuity because of the abrupt truncation. This problem can be fixed by employing a switching function, which smoothly scales the van der Waals interactions down to zero at the cutoff distance. For faster computation, pair lists are built to track the particles that could come inside the cutoff distance (the pair-list distance is ~2 Å larger than the cutoff distance), and the lists are updated during the simulation.

The second term of the V_non-bonded represents the long-range (∝ 1/r) electrostatic potential, which cannot be truncated in the same way as the Lennard-Jones potential. However, the particle mesh Ewald method can be used to treat the long-range interaction part.
[Figure: Liquids like water can be simulated using periodic boundary conditions (PBC). A unit cell of the liquid is replicated periodically.]

Minimum Image Convention

The periodic boundary conditions use the minimum image convention to calculate distances between particles in the system. Consider four particles in a cubic box of length L. The potential interactions are then calculated between a given particle and the nearest periodic images of the other three particles. The minimum image convention was first used by Metropolis and coworkers.[1]
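A minimum-image distance calculation for a cubic box of length L can be sketched in a few lines of Python: each Cartesian component of the separation vector is shifted by a multiple of L so that it lies within [−L/2, L/2]:

```python
def minimum_image_distance(r1, r2, box_length):
    """Distance between two particles in a cubic periodic box using the
    minimum image convention: each component of the separation vector is
    wrapped by a multiple of box_length into [-L/2, L/2]."""
    d2 = 0.0
    for x1, x2 in zip(r1, r2):
        dx = x2 - x1
        dx -= box_length * round(dx / box_length)  # wrap this component
        d2 += dx * dx
    return d2 ** 0.5

L = 10.0
# Two particles near opposite faces of the box: the nearest image of the
# second particle is only 2 units away, not 8.
print(minimum_image_distance((1.0, 0.0, 0.0), (9.0, 0.0, 0.0), L))  # → 2.0
```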

Periodic Boundary Conditions with NAMD


NAMD software requires three cell basis vectors to define the size and shape of the periodic cell: cellBasisVector1, cellBasisVector2, and cellBasisVector3. For a rectangular cell, each vector is perpendicular to the other two. The coordinates of the center of the periodic cell can be specified by the “cellOrigin” command. In addition, the “wrapAll on” command is usually used to ensure that any particle that crosses the boundary of the original cell is imaged back into the cell.

References
1. Metropolis, N.; Rosenbluth, A. W.; Rosenbluth, M. N.; Teller, A. H.; Teller, E. (1953). "Equation of State Calculations by Fast Computing Machines". The Journal of Chemical Physics 21: 1087-1092.
Further Reading
Allen, M. P.; Tildesley, D. J. Computer Simulation of Liquids; Oxford University Press: New York, 1989.

Tuckerman, M. Statistical Mechanics: Theory and Molecular Simulation; Oxford University Press: New
York, 2010.

[Figure: Molecular dynamics simulation of hexane in a periodic 30 Å × 30 Å × 30 Å simulation cell. Hydrogen atoms are omitted for clarity.]
Treatment of Long Range Forces
Treatment of long-range interactions in molecular simulations poses a serious challenge in terms of computational requirements. By employing
periodic boundary conditions, bulk properties of a sample may be calculated efficiently using a limited number of molecules within a finite cell.
Each molecule in the cell interacts with every other molecule, both within the same cell and within the other infinite periodic cell images. These
interactions include dispersion, repulsion, and any other electrostatic interactions. Essentially, this means that there are an infinite number of
intermolecular interactions that must be calculated during the simulation, which is a problem in practical computation.

As is common in molecular simulation, the Lennard-Jones potential is used to describe both dispersion and repulsion interactions. These interactions decay fairly quickly relative to other electrostatic interactions, and can thus be dealt with fairly easily in computation. Electrostatic interactions (such as charge-dipole, dipole-dipole, quadrupole-quadrupole, etc.) are often significantly stronger at a given distance r, and decay at a much slower rate as r increases; interactions that vary as higher powers of 1/r decay faster. This makes the treatment of electrostatic interactions somewhat more complicated.

Cutoffs
Non-bonded cutoffs are employed to save computational resources without sacrificing the validity of the results. These cutoffs are used to model long-range interactions between atoms or molecules in a simulation, and essentially truncate the interactions at a specific distance. Because intermolecular interaction potentials converge to a particular value (namely zero) as r → ∞, the interaction potential can effectively be set to this value after a certain distance. In the case of dispersion and repulsion interactions, which are modelled by the Lennard-Jones potential, a switching function, S(r), is employed in the simulation. This switching function causes the decay of the interaction to reach zero by a desired cutoff distance, at which point the two molecules or atoms involved in the intermolecular interaction do not interact through the Lennard-Jones potential at all.

[Figure: The unmodified Lennard-Jones potential (blue) and the Lennard-Jones potential with a smooth potential-based cutoff using a switching function (red). The switching function is active in the grey region. The switched potential is zero beginning in the blue region.]

The intermolecular distance at which the switching function is turned on is the switch distance, which is generally about 2 Å less than the cutoff distance. Between the switch distance and the cutoff distance, the interaction potential is gradually scaled to zero by the switching function. This gradual change in interaction potential maintains the physical basis of the simulation without using an abrupt, non-physical cutoff of interactions (as is seen in the hard-sphere model for repulsive interactions). This method avoids the discontinuity in the interaction potential that an abrupt truncation of the interaction potential function would cause.

Such a convenient treatment of long-range interactions is easily applied to dispersion and repulsion interactions modelled by the Lennard-Jones
potential, but is not applicable in all cases. For electrostatic interactions, other methods (such as Ewald summation, see below) must be employed.
This is because electrostatic interactions are typically much longer-range than dispersion and repulsion interactions, and truncation through the use
of simple switching functions could result in a serious decrease in accuracy of results. These interactions are stronger than dispersion or repulsion,
and must be treated more carefully.
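A switched Lennard-Jones potential can be sketched as follows. The cubic switching polynomial used here is one common choice (an assumption; simulation packages differ in the exact functional form), and the argon-like ε and σ values are illustrative:

```python
def lj(r, epsilon=0.238, sigma=3.4):
    """Lennard-Jones potential; defaults are roughly argon-like
    (kcal/mol and Angstrom, illustrative values only)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def switched_lj(r, r_switch=8.0, r_cutoff=10.0):
    """Lennard-Jones potential scaled smoothly to zero between the
    switch distance and the cutoff distance."""
    if r >= r_cutoff:
        return 0.0
    if r <= r_switch:
        return lj(r)
    x = (r - r_switch) / (r_cutoff - r_switch)
    s = 1.0 - 3.0 * x * x + 2.0 * x ** 3  # S(0)=1, S(1)=0, zero slope at both ends
    return s * lj(r)

print(switched_lj(5.0) == lj(5.0))   # unmodified below the switch distance
print(switched_lj(10.0))             # exactly zero at (and beyond) the cutoff
print(switched_lj(9.0))              # partially scaled inside the switching region
```

Because S(r) and its first derivative vanish at the cutoff, both the potential and the force go continuously to zero, avoiding the discontinuity of an abrupt truncation.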

Ewald Decomposition
While truncation of the Lennard-Jones potential for dispersion and repulsion interactions is made relatively simple through the use of switching
functions and non-bonded cutoffs, dealing with long-range electrostatic interactions is less straightforward, and requires a different approach.
Evaluation of these interactions is commonly performed using the Ewald method, which divides electrostatic interactions into short-range and long-
range components. The short-range component quickly decays to zero in real space, thus it can be truncated using a switching function, as was the
case for the Lennard-Jones potential. The long-range component is more difficult to deal with, though it can be calculated efficiently in reciprocal
space. In the Ewald summation method, the Gauss error function (erf) and complementary error function (erfc) are employed.[1] These functions are defined as

erf(x) = (2/√π) ∫₀ˣ e^(−t²) dt

and

erfc(x) = (2/√π) ∫ₓ^∞ e^(−t²) dt

with erf(0) = 0 and erfc(0) = 1. These functions are thus complementary. Also, note that erf(x) → 1 and erfc(x) → 0 as x → ∞; thus, by definition,

erf(x) + erfc(x) = 1
In using these Gauss functions, the 1/r term of the Coulombic interaction potential is broken down into the aforementioned short- and long-range terms,

1/r = erfc(αr)/r + erf(αr)/r

where α is a tunable constant chosen to ensure the short-range electrostatic interaction decays to zero at the desired cutoff value. In this case, the erfc(αr)/r term corresponds to the short-range interaction potential, which decays quickly to zero through evaluation in real space, while the erf(αr)/r term corresponds to the long-range interaction potential, which is more easily evaluated in reciprocal space.[2] This potential is evaluated as a sum of all possible charge-charge interactions across the infinite periodic images of the simulation cell. In reciprocal space, these interactions converge quickly, allowing efficient computation through application of fast Fourier transforms.[1]

[Figure: The Ewald decomposition divides the Coulombic interaction into a short-range real-space component and a long-range reciprocal-space component by factoring it with the erfc(r) and erf(r) identity. The real-space erfc(r)/r component converges to zero at short distances, avoiding truncation error.]

The sum of all charge-charge interactions at long range throughout all periodic images of the simulation system can be represented as

V_long = (1/2) Σ_n Σ_i Σ_j′ q_i q_j erf(α|r_ij + n|)/|r_ij + n|

(the prime indicating that the i = j term is excluded for n = 0)

where r_ij is the distance vector between interacting charges q_i and q_j, and n is the translational vector between different periodic cell images. Equivalently, this sum can be written as a Fourier series,[1]

V_long = Σ_k A(k) e^(i k·r)

where A(k) is a reciprocal-space expansion coefficient and k is a reciprocal-space vector. Note that while this sum is still infinite, A(k) → 0 quickly as |k| increases, allowing efficient computation of long-range electrostatic interactions through application of fast Fourier transforms.
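The identity behind this decomposition is easy to verify numerically: erf(αr)/r + erfc(αr)/r reproduces 1/r exactly for any choice of α, while the erfc part decays far faster than 1/r. A small sketch using Python's built-in math.erf and math.erfc:

```python
import math

def coulomb_split(r, alpha=0.3):
    """Split the bare 1/r Coulomb term into a short-range (erfc) part,
    evaluated in real space, and a long-range (erf) part, evaluated in
    reciprocal space, as in the Ewald decomposition."""
    short = math.erfc(alpha * r) / r
    long_ = math.erf(alpha * r) / r
    return short, long_

for r in (1.0, 5.0, 15.0):
    short, long_ = coulomb_split(r)
    # The two pieces always sum back to the bare Coulomb term 1/r.
    print(r, short + long_, 1.0 / r)

# The short-range part decays much faster than 1/r itself:
print(coulomb_split(15.0)[0])   # erfc(4.5)/15, effectively zero
```

This is why the real-space part can safely be truncated with a cutoff, while the smooth erf part is handled by the reciprocal-space sum.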

Particle Mesh Ewald


The particle mesh Ewald method is a numerical approximation to Ewald summation.[3] It involves a "smearing" of charges across a finite set of
lattice points, located on a grid (or "mesh") that is placed over the simulation cell. By localizing charge density on this grid, the overall charge
distribution across the simulation cell and its infinite periodic images is reduced to a more manageable form. This makes calculation of long-range
electrostatic interactions significantly easier and less expensive, particularly in systems with a large simulation cell length and thus a large number of reciprocal-space k vectors. From the lattice of "smeared" charge densities, long-range interaction potentials can be calculated using a fast Fourier transform. While
this approximation decreases the computational cost significantly, accuracy of collected data is still high. The particle mesh Ewald method is also
relatively simple to integrate into a simulation.[4] These factors combine to make it an extremely attractive method to calculate long-range
electrostatic interaction potentials.

See Also
Wikipedia article on Ewald summation
Monte Carlo Methods
Monte Carlo methods are stochastic techniques that use random numbers to sample conformation space. In this method, a random conformation is generated and then either accepted or rejected.

A Monte Carlo simulation starts from a given conformation, and random numbers are used to generate a new trial conformation. This trial conformation is then either accepted or rejected. If it is accepted, it becomes the current conformation, and the next iteration starts from it.

Monte Carlo Integration


Computing a definite integral is a simple illustrative example of the Monte Carlo method. Consider a simple function y = f(x) and its definite integral from 0 to 1:

I = ∫₀¹ f(x) dx

In this case, the conformation space is two-dimensional, and random x and y values are generated to make each new trial conformation. Only the trial points (pairs of x and y values) that fall below the graph of the function are accepted. The number of accepted conformations divided by the total number of samples is the probability for samples to be accepted. Multiplying this probability by the area of the sampling space gives an approximation of the correct value of the integral. A larger number of samples results in a value closer to the exact value.

[Figure: A Metropolis Monte Carlo simulation of liquid argon at 98 K.]

[Figure: Using the Monte Carlo method for evaluating a definite integral. The random samples are accepted with a probability equal to the fraction of the sampling area lying below the curve.]

This method is usually used for higher-dimensional integrals.
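The hit-or-miss procedure described above can be sketched as follows. The source does not specify the example function, so f(x) = x² on [0, 1] (exact integral 1/3) is used here purely as an illustration:

```python
import random

def mc_integrate(f, n_samples, y_max=1.0, seed=42):
    """Hit-or-miss Monte Carlo estimate of the integral of f over [0, 1],
    assuming 0 <= f(x) <= y_max. The fraction of random (x, y) points
    falling below the curve, times the area of the sampling rectangle,
    approximates the integral."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(n_samples):
        x = rng.random()
        y = rng.random() * y_max
        if y < f(x):            # trial point lies below the graph: accept
            accepted += 1
    return (accepted / n_samples) * y_max   # probability * sampling area

estimate = mc_integrate(lambda x: x * x, 100000)
print(estimate)   # close to the exact value 1/3
```

The statistical error shrinks as 1/√N with the number of samples, independent of dimension, which is why the approach pays off for high-dimensional integrals.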

Metropolis Monte Carlo Simulation


In molecular simulations, the Metropolis Monte Carlo algorithm is usually used to accept or reject the trial conformation. If the energy of the trial conformation is lower than or equal to the current energy, it is always accepted. If the energy of the trial conformation is higher than the current energy, it is accepted with a probability determined by the Boltzmann distribution,

P_accept = exp(−ΔU/k_B T)

This means that, if ΔU > 0, a random number between 0 and 1 determines whether to accept or reject the conformation: the move is accepted if the random number is less than P_accept.

[Figure: The schematic of the Metropolis Monte Carlo algorithm. Moves that decrease the potential energy of the system (ΔU ≤ 0) are always accepted. Moves that increase the potential energy (ΔU > 0) are accepted with a probability determined by the Boltzmann distribution.]

The Metropolis algorithm satisfies microscopic reversibility, meaning that the probability of being in one state and moving to the next state is the same as being in the new state and moving back to the previous state.

Sampling

The configurational integral requires integration of the Boltzmann distribution over all spatial variables. To employ Monte Carlo integration to estimate the configurational integral (Z), a large number of configurations are randomly generated and the Boltzmann weight is evaluated only at these points. With enough samples, the configurational integral can be approximated. But direct Monte Carlo sampling is extremely inefficient because a huge number of states must be sampled. Moreover, there is an equal probability of sampling high-energy states as low-energy states, while by the Boltzmann distribution the system is most likely to be in a low-energy state.

The Metropolis algorithm changes the acceptance probability so that the system samples low-energy states preferentially.
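The Metropolis acceptance rule can be sketched as a single function: moves that lower the energy are always accepted, and uphill moves are accepted with probability exp(−ΔU/k_B T):

```python
import math
import random

def metropolis_accept(delta_u, kT, rng):
    """Metropolis criterion: accept if the energy decreases, otherwise
    accept with Boltzmann probability exp(-delta_u/kT)."""
    if delta_u <= 0.0:
        return True
    return rng.random() < math.exp(-delta_u / kT)

rng = random.Random(7)
# Downhill moves are always accepted.
print(metropolis_accept(-1.0, 1.0, rng))   # → True
# Uphill moves are accepted with probability exp(-dU/kT).
trials = 100000
accepted = sum(metropolis_accept(1.0, 1.0, rng) for _ in range(trials))
print(accepted / trials)   # close to exp(-1) ≈ 0.368
```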

Limitations of Monte Carlo Simulations

With Monte Carlo simulations, only an equilibrium distribution of states can be sampled. Moves are random and non-physical, so a trajectory only corresponds to a sequence of Monte Carlo moves and contains no temporal information; dynamical properties like diffusion coefficients cannot be calculated from a Monte Carlo trajectory.

[Figure: At higher temperatures, high energy increases are more likely to be accepted. Conversely, at lower temperatures there is a greater probability of rejecting the state.]

Equilibration

Metropolis Monte Carlo cycles favor transitions to lower energy and reach more probable structures. Many iterations of the Metropolis Monte Carlo algorithm will move the system to an equilibrium state. Multiple properties can be plotted to determine when equilibration is complete.

The approach to equilibrium can be seen as a slow drift; equilibration is not complete until the drift has stopped and properties fluctuate around an average value.

Equilibrium properties

At equilibrium, the system is not undergoing a systematic change; there is no large-scale gradient in the physical properties of the matter. If a small perturbation is made to the system, the system will return to its equilibrium state. At equilibrium there is no momentum gradient, concentration gradient, or temperature gradient in the system.

See Also
Molecular Dynamics
Molecular dynamics simulation is a tool for simulating the motions of the atoms of a many-body system. Molecular dynamics is used to compute the equilibrium and transport properties (viscosity, thermal conductivity, diffusion, reaction rates, protein folding times, structure, and surface coating) of a classical system. To simulate the dynamics of molecules, classical Newtonian mechanics can be used. In practice, molecular dynamics simulations are performed by moving forward in small increments of time: the classical equations of motion for the atoms and molecules are solved numerically to obtain the evolution of the system. Because molecular systems typically consist of a vast number of particles, it is impossible to determine the properties of such complex systems analytically, so MD solves this problem using numerical methods.

Verlet Algorithm
[Figure: Molecular dynamics simulation of argon.]

One popular algorithm that predicts the positions of atoms at a later point in time is the Verlet algorithm.[5] This algorithm approximates the position of the particles at time t + δt (one time step into the future from time t) by using a Taylor series approximation.

A Taylor series approximates a function at a point x based on the values and derivatives of that function at another point a.

Derivation of the Verlet Algorithm

A Taylor series approximates the position of the molecule at time t + δt:

r(t + δt) = r(t) + v(t) δt + (1/2) a(t) δt² + …

The first and second derivatives of the position correspond to the velocity and acceleration, and the acceleration is related to the force through Newton's second law, a(t) = F(t)/m.

Considering one time step backward in time:

r(t − δt) = r(t) − v(t) δt + (1/2) a(t) δt² − …

Taking the sum of the positions at the previous and next steps:

r(t + δt) + r(t − δt) = 2 r(t) + a(t) δt²

This rearranges to the Verlet algorithm:

r(t + δt) = 2 r(t) − r(t − δt) + a(t) δt²
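The Verlet recursion can be sketched for a one-dimensional harmonic oscillator (a(x) = −x, i.e., ω = 1, chosen here because the exact trajectory x(t) = cos t is available for comparison):

```python
import math

def verlet(x0, v0, accel, dt, n_steps):
    """Position Verlet: x(t+dt) = 2*x(t) - x(t-dt) + a(t)*dt**2.
    The first step is bootstrapped with a Taylor expansion, since the
    recursion needs two previous positions."""
    x_prev = x0
    x = x0 + v0 * dt + 0.5 * accel(x0) * dt * dt   # bootstrap: step to t = dt
    for _ in range(n_steps - 1):
        x_next = 2.0 * x - x_prev + accel(x) * dt * dt
        x_prev, x = x, x_next
    return x

# Harmonic oscillator with omega = 1: a(x) = -x, exact solution cos(t).
dt, n = 0.001, 1000            # integrate to t = 1
x_num = verlet(1.0, 0.0, lambda x: -x, dt, n)
print(x_num, math.cos(1.0))    # the two agree to several decimal places
```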


Derivation of the Velocity Verlet Algorithm

An alternative form of the Verlet equation involves the velocities explicitly. Using the Taylor series for one time step forward from the current time:

r(t + δt) = r(t) + v(t) δt + (1/2) a(t) δt²

Expanding the current time step backward from the next time step:

r(t) = r(t + δt) − v(t + δt) δt + (1/2) a(t + δt) δt²

By substitution, the velocity Verlet algorithm is obtained:

v(t + δt) = v(t) + (1/2) [a(t) + a(t + δt)] δt
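The velocity Verlet update can be sketched the same way; for the harmonic oscillator test case (ω = 1) the total energy should stay very close to its initial value:

```python
def velocity_verlet(x, v, accel, dt, n_steps):
    """Velocity Verlet:
    x(t+dt) = x(t) + v(t)*dt + 0.5*a(t)*dt**2
    v(t+dt) = v(t) + 0.5*(a(t) + a(t+dt))*dt"""
    a = accel(x)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = accel(x)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

# Harmonic oscillator (omega = 1): total energy E = (v^2 + x^2)/2 = 0.5 initially.
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, dt=0.01, n_steps=10000)
energy = 0.5 * (v * v + x * x)
print(energy)   # stays very close to the initial value 0.5
```

Unlike position Verlet, velocity Verlet carries the velocities explicitly at every step, which is convenient for computing kinetic energies and temperatures.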

See Also
Wikipedia:Verlet integration
Chemwiki: Lennard-Jones (http://chemwiki.ucdavis.edu/Physical_Chemistry/Physical_Properties_of_Matter/Intermolecular_Forces/Lennard-Jones_Potential)

Time step
The time step for most molecular dynamics simulations is on the femtosecond scale, which is the scale of chemical bond vibrations. Molecular dynamics simulations are limited by the highest-frequency vibration, and normally the time step should be roughly ten times shorter than the period of the highest-frequency motion. Generally, molecular dynamics simulations of organic molecules use a time step of 1-2 fs (10⁻¹⁵ s) because of the fast vibrations of bonds containing hydrogen.

The first problem is that some processes, such as protein folding, require relatively long simulation times to observe in their entirety. To observe these processes in an MD simulation, we need simulations of at least microsecond length.

This is a very large number of time steps, even for a computer. To address this problem, constraints are applied. Many molecular dynamics simulations constrain all hydrogen-containing bonds to their equilibrium lengths, because hydrogen is a very light atom with very fast bond vibrations; constraining these bonds neglects the fastest motions in the system.

Another problem is that the vibrational energy levels of covalent bonds are widely spaced, so these vibrations are poorly described by classical mechanics; it is better to assume that the bonds are rigid.

Time reversibility
An isolated system propagating through Newton's laws of motion is time-reversible. Molecular dynamics algorithms are, in many cases, also time-reversible: by running the trajectory forward and then backward by the same number of steps, the system returns to its starting point by the same sequence of steps. Consequently, two MD simulations with the same initial conditions and time steps will follow exactly the same trajectories.

A second consequence of this is that the system is deterministic. If you know the positions and velocities of the particles at a given point in time, and can calculate the forces acting on them in any configuration, then, in principle, you can integrate the laws of motion to determine the state of the system at any point in the future. In this way, we can determine the future state of the system (i.e., positions and velocities) based on the state it is in now.

Sampling Thermodynamic Ensembles Using Molecular Dynamics

Microcanonical ensemble (NVE)


A molecular dynamics simulation with an integrator that approximates Newtonian dynamics (e.g., the Verlet algorithm) will conserve the total energy of the system (i.e., the sum of the kinetic and potential energies remains constant at each time step). As a result, these simulations sample the microcanonical ensemble (NVE), because the total energy of the system (E) is conserved.

The Verlet equations and all other integrators have some error associated with the truncation of higher-order terms in the Taylor series expansion. This error is more significant when larger time steps are used. Further, there is numerical error that results from rounding in finite-precision computer arithmetic (e.g., 32- or 64-bit registers). As a result, the total energy of a molecular dynamics simulation can drift from its initial value over the course of a simulation.

[Figure: Conservation of the total kinetic and potential energy in a molecular dynamics simulation of liquid argon using the Verlet equation.]

Sampling Different Ensembles

To sample a different ensemble, the equations of motion must be modified. This is commonly described as performing a molecular dynamics simulation with a thermostat or barostat.

Canonical ensemble (NVT)


In the canonical ensemble, the amount of substance (N), volume (V), and temperature (T) are constant. In this ensemble, the energy of endothermic and exothermic processes is exchanged with a thermostat. Several methods have been developed that modify the equations of motion of a molecular dynamics simulation so that they sample the canonical ensemble, such as the wikipedia::Andersen thermostat and wikipedia::Nosé–Hoover thermostat.

Isothermal–isobaric ensemble (NPT)

In the isothermal–isobaric ensemble, the amount of substance (N), pressure (P), and temperature (T) are constant. Configurations of particles are Boltzmann-weighted in this ensemble. A barostat is also needed in this ensemble.

Visualizing the MD Simulations


The time evolution of the positions and velocities of the particles of the system is called a trajectory. This can be rendered into an animation that shows the motion of the particles over the course of the simulation. VMD (http://www.ks.uiuc.edu/Research/vmd/) is a popular program for visualizing molecular dynamics simulations.

See Also
Wikipedia:Molecular Dynamics

Tuckerman Notes: The Hamiltonian formulation of classical mechanics (http://www.nyu.edu/classes/tuckerman/stat.mech/lectures/lecture_1/node4.html)

Literature
1. Tuckerman, M. E. (2009). Statistical Mechanics: Theory and Molecular Simulation. Oxford University Press.
2. Allen, M. P.; Tildesley, D. J. (1987). Computer Simulation of Liquids. Clarendon Press.
3. Herce, H. D.; Garcia, A. E.; Darden, T. (28 March 2007). "The electrostatic surface term: (I) Periodic systems". The Journal of Chemical Physics 126 (12): 124106. doi:10.1063/1.2714527. PMID 17411107. Bibcode: 2007JChPh.126l4106H.
4. Darden, T.; York, D.; Pedersen, L. (15 June 1993). "Particle mesh Ewald: An N·log(N) method for Ewald sums in large systems". The Journal of Chemical Physics 98 (12): 10089. doi:10.1063/1.464397.
5. Verlet, Loup (1967). "Computer "Experiments" on Classical Fluids. I. Thermodynamical Properties of Lennard−Jones Molecules". Physical Review 159: 98–103. doi:10.1103/PhysRev.159.98. http://link.aps.org/doi/10.1103/PhysRev.159.98.

[Figure: Molecular dynamics simulation of a POPC bilayer.]
Molecular Dynamics of the Canonical and Isothermal-Isobaric
Ensembles
The Canonical and Isothermal-Isobaric Ensembles
An ensemble is a representation of the various states of a system in thermodynamic equilibrium. The constraints operating on the system determine the type of ensemble.

A canonical ensemble represents the possible states of a system characterized by constant values of N, V, and T (constant volume and temperature). The energy of the microstates can fluctuate, giving a distribution of energies. In this ensemble, the system is in contact with a heat bath at a fixed temperature.

The isothermal–isobaric ensemble is a collection of systems characterized by the same values of N, P, and T (the constant-temperature, constant-pressure ensemble). This ensemble allows the volume and the energy to fluctuate, giving a distribution of energies and volumes. This ensemble has Boltzmann-weighted configurations at a pressure p, surrounded by a heat bath at temperature T.

[Figure: Distribution of the potential energy of a molecular simulation in the canonical (NVT) ensemble.]

Thermostats/ Barostats
A thermostat is a modification of the equations of motion that generates a statistical ensemble at a constant temperature. The most widely used thermostats in molecular dynamics are the Langevin, Andersen, and Nosé–Hoover thermostats.

The Langevin thermostat is a stochastic thermostat that applies a friction force and a random force to the momenta.

The Andersen thermostat assigns the velocity of a randomly chosen particle to a new velocity drawn from the Maxwell–Boltzmann distribution. In this thermostat the system is coupled to a heat bath that imposes the desired temperature. The equations of motion are Hamiltonian with a stochastic collision term. Because the collisions are stochastic, the dynamics are not physical: the trajectory is no longer time-reversible or deterministic.

The Nosé–Hoover thermostat allows a system to be simulated in the NVT ensemble. The idea is to introduce a fictitious degree of freedom representing the heat bath, coupling the dynamics to the bath through the system Hamiltonian. The Nosé equations of motion are reversible and deterministic, and they sample a distribution equivalent to the canonical ensemble.

In barostats, similar to temperature coupling, an additional term is added to the equations of motion to effect a pressure change. Two of the most used barostats are the Andersen barostat and the Parrinello–Rahman barostat.

[Figure: Isothermal–isobaric molecular dynamics simulation of water]

Nose-Hoover thermostat derivation


In the Nosé–Hoover thermostat, the Hamiltonian has a fictitious degree of freedom for the heat bath:

\mathcal{H} = \sum_i \frac{\mathbf{p}_i^2}{2m_i s^2} + V(\mathbf{r}) + \frac{p_s^2}{2Q} + g k_B T \ln s

Where:

p_s: is the momentum of the fictitious degree of freedom.

Q: is the effective mass.

s: the extended variable.

g k_B T \ln s: is chosen to be the potential energy of the fictitious degree of freedom, with g the number of degrees of freedom of the physical system.

Equations of motion follow from Hamilton's equations.

Velocities of the particles:

\dot{\mathbf{r}}_i = \frac{\mathbf{p}_i}{m_i s^2}

Velocity of the "agent":

\dot{s} = \frac{p_s}{Q}

Acceleration of the particles:

\dot{\mathbf{p}}_i = -\frac{\partial V}{\partial \mathbf{r}_i}

Acceleration of the "agent":

\dot{p}_s = \sum_i \frac{\mathbf{p}_i^2}{m_i s^3} - \frac{g k_B T}{s}
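The thermostats described above all modify the particle velocities to impose a target temperature. As an illustration, here is a minimal sketch of a Langevin-style velocity update for a single velocity component (hypothetical function names, reduced units with k_B = 1): an exponential friction decay plus a Gaussian kick whose amplitude is chosen so that the stationary distribution of v is Maxwellian at the target temperature.

```python
import math
import random

def langevin_velocity_update(v, mass, gamma, temperature, dt, rng=random):
    """One Langevin thermostat step for a single velocity component
    (reduced units, kB = 1): friction damps v, random noise reheats it."""
    c1 = math.exp(-gamma * dt)                              # friction decay
    c2 = math.sqrt((1.0 - c1 * c1) * temperature / mass)    # noise amplitude
    return c1 * v + c2 * rng.gauss(0.0, 1.0)

# Repeated updates drive <v^2> toward kB*T/m regardless of the starting value:
rng = random.Random(1)
v, samples = 5.0, []
for step in range(20000):
    v = langevin_velocity_update(v, mass=1.0, gamma=1.0, temperature=2.0,
                                 dt=0.1, rng=rng)
    samples.append(v * v)
mean_v2 = sum(samples[2000:]) / len(samples[2000:])         # ~ kB*T/m = 2.0
```

The friction and noise terms are linked: weakening one without the other would equilibrate the system to the wrong temperature, which is the fluctuation-dissipation balance the Langevin thermostat enforces.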


Umbrella Sampling
Umbrella sampling is a sampling method used in computational physics and chemistry. It can sample rare states that normal molecular dynamics sampling misses, and it can therefore improve free energy calculations when a system is undergoing a systematic change.

Biased molecular dynamics simulations


Normal MD simulations sample the system at equilibrium. In an MD simulation of the time series of the C-C-C-C dihedral angle of n-butane(aq), only gauche and trans states are sampled. Because the simulation is only 2 ns long, states with high free energy (e.g., the cis state) are unlikely to occur. These configurations are missed, and it is impossible to calculate the free energy of these states from such a simulation. An artificial bias potential is needed in this case to help the molecule cross the energy barrier. With a bias potential, rare states can be sampled effectively.

In this case, a harmonic bias potential is needed to counteract the dihedral barrier.

High free energy states were captured by biased simulation. In order to calculate the free energy profile of
these states, biased probability distribution has to be converted to an unbiased probability distribution.

Acquire free energy profile from biased simulations


The potential energy, including the bias potential w at the reaction coordinate s, is

V^b(\mathbf{r}) = V^u(\mathbf{r}) + w(s(\mathbf{r}))

The probability distribution sampled with this biased potential is

P^b(s) = \frac{\int e^{-\left[V^u(\mathbf{r}) + w(s(\mathbf{r}))\right]/k_BT}\, \delta(s(\mathbf{r})-s)\, d\mathbf{r}}{\int e^{-\left[V^u(\mathbf{r}) + w(s(\mathbf{r}))\right]/k_BT}\, d\mathbf{r}}

The probability distribution of the unbiased potential is

P^u(s) = \frac{\int e^{-V^u(\mathbf{r})/k_BT}\, \delta(s(\mathbf{r})-s)\, d\mathbf{r}}{\int e^{-V^u(\mathbf{r})/k_BT}\, d\mathbf{r}}

From these equations, we can derive

P^u(s) = P^b(s)\, e^{w(s)/k_BT}\, \left\langle e^{-w(s)/k_BT} \right\rangle

The free energy profile can be calculated from the probability distribution by

A(s) = -k_BT \ln P^u(s)

Using this relation, the PMF of the biased simulation can be converted to the unbiased PMF by

A(s) = -k_BT \ln P^b(s) - w(s) + F

The term -k_BT \ln \left\langle e^{-w(s)/k_BT} \right\rangle is denoted F. It is a constant for a given window and in many cases does not affect the relative energies, so it does not need to be calculated. When needed, it can be calculated by [1]:

F = -k_BT \ln \left\langle e^{-w(s)/k_BT} \right\rangle

[Figure: The time series of the C-C-C-C dihedral angle of n-butane(aq) from a 2 ns molecular dynamics simulation]
[Figure: The time series of the C-C-C-C dihedral angle of n-butane(aq) from a 2 ns molecular dynamics simulation with a bias potential to stabilize the gauche conformations]
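The unbiasing relation A(s) = -kT ln P^b(s) - w(s) + F can be applied directly to a histogram from a single biased window. The sketch below (hypothetical function names, kT = 1) fixes the constant F by shifting the minimum of the profile to zero, since only relative energies matter.

```python
import math

def unbias_pmf(s_values, biased_prob, bias_w, kT=1.0):
    """Recover the unbiased PMF from a biased distribution using
    A(s) = -kT ln P_b(s) - w(s) + F; F is fixed by zeroing the minimum."""
    pmf = [-kT * math.log(p) - bias_w(s) for s, p in zip(s_values, biased_prob)]
    shift = min(pmf)
    return [a - shift for a in pmf]

# Synthetic check: if the true PMF is A(s) = (s - 1)^2 and the window is
# biased by w(s) = 2 s^2, unbiasing the biased histogram returns A(s).
s_grid = [i * 0.1 for i in range(-20, 21)]
true_A = lambda s: (s - 1.0) ** 2
w = lambda s: 2.0 * s * s
p = [math.exp(-(true_A(s) + w(s))) for s in s_grid]
norm = sum(p)
p = [x / norm for x in p]
recovered = unbias_pmf(s_grid, p, w)
```

The normalization of the histogram only shifts the profile by a constant, which is absorbed into F; the synthetic check above recovers (s - 1)^2 exactly because the grid contains the minimum at s = 1.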

This method provides the free energy profile of all possible states. In the umbrella sampling of n-butane(aq), the chosen bias potential covered the entire reaction coordinate. General cases are more complex, which makes the determination of the bias potential more difficult.

[Figure: An isothermal-isobaric molecular dynamics simulation of butane in liquid water. A bias potential is applied to the C-C-C-C dihedral angle to facilitate sampling of the butane rotational barrier]

Choice of Bias Potential
The previous section discussed the biased molecular dynamics simulation of n-butane(aq). The reaction coordinate is one-dimensional and periodic, and the bias potential was chosen to be the negative of the dihedral potential of n-butane[2]. The optimum bias potential is the negative of the free energy profile[1].

However, the free energy profile is unknown in most cases. For general cases, the bias potential needs to be adjusted along the reaction coordinate. Thus, a harmonic bias potential restrained to a reference point s_i within a window on the reaction coordinate is introduced[2]:

w_i(s) = \frac{K}{2}\left(s - s_i\right)^2

Therefore, a full umbrella sampling can be obtained from a series of biased MD simulations at different reference points along the reaction coordinate.

[Figure: An isothermal-isobaric molecular dynamics simulation of butane in liquid water]

Calculation of the Potential of Mean Force from Umbrella Sampling Data
The Weighted Histogram Analysis Method (WHAM)[3] transforms a series of biased distribution functions into a single distribution function. The value of F_i needs to be estimated to give the correct value of A(s). The true distribution P(s) is the weighted average over the windows[1]:

P(s) = \sum_i \frac{N_i\, P_i^b(s)}{\sum_j N_j\, e^{-\left[w_j(s) - F_j\right]/k_BT}}

where N_i is the total number of steps sampled for window i[3]. Combined with

F_i = -k_BT \ln \int P(s)\, e^{-w_i(s)/k_BT}\, ds

both P(s) and F_i can be obtained by iterating the two equations to self-consistency.

The other way to analyze umbrella sampling is umbrella integration; see [1].

[Figure: The potential of mean force for the C-C-C-C dihedral angle of n-butane calculated from a molecular dynamics simulation with a bias potential. The PMF from the biased probability distribution is plotted in red; the bias potential is plotted in blue]

See also
Wikipedia:Umbrella sampling

For more information about umbrella sampling, see[4]

References
1. Kästner, Johannes (2011). "Umbrella sampling". Wiley Interdisciplinary Reviews: Computational Molecular Science 1 (6): 932–
942.
2. Invalid <ref> tag; no text was provided for refs named r1
3. Kumar, S; Rosenberg, JM; Bouzida, D; Swendsen, RH (1992). "The weighted histogram analysis method for free‐energy
calculations on biomolecules. I. The method". Journal of computational chemistry 13 (8): 1011–1021.
4. Torrie, GM; Valleau, JP (February 1977). "Nonphysical sampling distributions in Monte Carlo free-energy estimation: Umbrella
sampling". Journal of Computational Physics 23 (2): 187–199. doi:10.1016/0021-9991(77)90121-8.
Free Energy Methods
Free energy perturbation and thermodynamic integration are molecular simulation methods to calculate the relative free energies (i.e., Gibbs or
Helmholtz) of two states.
Replica Exchange Molecular Dynamics
Replica exchange molecular dynamics involves performing simultaneous simulations on many (M) copies of a system, each with a different value of a certain control variable. Parallel tempering is a replica exchange method in which temperature is used as the control parameter. A set of temperatures is assigned to the copies of the system satisfying T_1 < T_2 < ... < T_M. The probability distribution for an independent system depends on the configuration of all the atoms within that system, and the complete set of configurations of the M copies defines the state of the extended ensemble. Replica exchange simulations are performed on canonical ensemble (NVT) systems. An exchange of coordinates between a neighbouring pair of replicas is attempted, and the attempted move is accepted or rejected based on the Metropolis Monte Carlo criterion[1].

[Figure: A replica exchange molecular dynamics simulation of the folding of a peptide. The lowest temperature replica is labeled in bold]

A schematic of a replica exchange simulation carried out on 4 replicas at different temperatures is shown. m steps of MD are performed on each individual replica between exchange attempts. The first attempted exchange, between replicas 3 and 4, is accepted, so these replicas swap coordinates. The attempted exchange between replicas 1 and 2 is rejected, so those replicas keep their coordinates.

Advantages

[Figure: A replica exchange molecular dynamics simulation where there are 4 replicas at different temperatures. m steps of molecular dynamics are completed between exchange attempts]

The aim of replica exchange molecular dynamics is to ensure that high temperature configurations can cross barriers on the potential energy surface easily. The low temperature replicas sample the lowest energy minima, while the high temperature replicas are capable of sampling the entire surface, making lower temperature configurations accessible at higher temperatures. Furthermore, conventional molecular dynamics or Monte Carlo methods show extremely slow convergence when calculating equilibrium properties: it takes a long time for a configuration to escape a local minimum because the probability of crossing an energy barrier of height \Delta E is proportional to e^{-\Delta E/k_BT}. Replica exchange methods improve sampling efficiency by making it easier to cross energy barriers.

The Acceptance Probability

Consider the simulation of two NVT systems, where the second simulation is thermostated at a higher temperature, T_2 > T_1.

The probability distribution of System 1 is

P_1(\mathbf{R}_1) = \frac{e^{-\mathcal{V}(\mathbf{R}_1)/k_BT_1}}{Z_1}

where \mathbf{R}_1 represents the coordinates of all atoms in system 1.

The probability distribution of System 2 is

P_2(\mathbf{R}_2) = \frac{e^{-\mathcal{V}(\mathbf{R}_2)/k_BT_2}}{Z_2}

where \mathbf{R}_2 represents the coordinates of all atoms in system 2.

The probability of System 1 having coordinates \mathbf{R}_1 and System 2 having coordinates \mathbf{R}_2 is the product P_1(\mathbf{R}_1)\,P_2(\mathbf{R}_2); the probability of System 1 having coordinates \mathbf{R}_2 and System 2 having coordinates \mathbf{R}_1 is P_1(\mathbf{R}_2)\,P_2(\mathbf{R}_1).

The ratio of these two probabilities gives the probability of accepting the attempted exchange of coordinates between Systems 1 and 2:

p_{acc} = \min\left(1,\; e^{-(\beta_1-\beta_2)\,\Delta\mathcal{V}}\right)

letting \Delta\mathcal{V} = \mathcal{V}(\mathbf{R}_2) - \mathcal{V}(\mathbf{R}_1), where \beta_1 = 1/k_BT_1 and \beta_2 = 1/k_BT_2.
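The acceptance test for a temperature swap can be written in a few lines. The sketch below (hypothetical function names, kB = 1 in reduced units) implements the Metropolis criterion: a swap that moves the lower-energy configuration to the colder replica is always accepted, and otherwise the move is accepted with probability exp[(1/T1 - 1/T2)(E1 - E2)].

```python
import math
import random

def exchange_accepted(E1, E2, T1, T2, rng=random):
    """Metropolis acceptance for swapping configurations between replicas
    at temperatures T1 and T2 (kB = 1): accept with probability
    min(1, exp[(1/T1 - 1/T2) * (E1 - E2)])."""
    delta = (1.0 / T1 - 1.0 / T2) * (E1 - E2)
    return delta >= 0.0 or rng.random() < math.exp(delta)

# A swap that lowers the energy of the colder replica is always accepted:
always = exchange_accepted(E1=-5.0, E2=-10.0, T1=1.0, T2=2.0)

# Otherwise the move is accepted with probability exp(delta) < 1:
rng = random.Random(7)
trials = 20000
accepted = sum(exchange_accepted(-10.0, -5.0, 1.0, 2.0, rng)
               for _ in range(trials))
rate = accepted / trials        # close to exp(-2.5) for this energy gap
```

Because the criterion depends only on the energy difference and the two inverse temperatures, exchanges are usually attempted only between neighbouring temperatures, where the energy distributions overlap and the acceptance rate stays usable.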

Coarse grain models
A coarse-grain (CG) model is a molecular simulation model in which groups of atoms are transformed into a smaller number of chemical sites called beads. This model aims to reduce the complexity of the problem while keeping the interactions essential to the properties of interest. The reduction in the number of particles eliminates irrelevant vibrational modes, thus reducing the computational resource requirements of the simulation.

The Mathematical Representation


Let A be an n-by-3 matrix in which each row holds the 3D Cartesian coordinates of one atom. In most CG models, the CG map f is a linear transformation of the matrix A to a new m-by-3 matrix R (with m < n) by an m-by-n matrix M:

R = M A    (1)

with M the weight matrix.

[Figure: The all-atom and MARTINI coarse grain representations of a POPC lipid bilayer]
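Equation (1) can be illustrated with a centre-of-mass mapping, the most common choice of weight matrix M: each row of M holds the atomic masses of one bead's atoms, normalised so the row sums to one. A minimal sketch (hypothetical function names, no external libraries):

```python
def coarse_grain(coords, masses, groups):
    """Map all-atom coordinates (list of [x, y, z]) to CG bead positions.
    Each bead is the mass-weighted average (centre of mass) of one atom
    group, i.e. the linear map R = M A with rows of M summing to 1."""
    beads = []
    for group in groups:
        m_tot = sum(masses[i] for i in group)
        beads.append([sum(masses[i] * coords[i][d] for i in group) / m_tot
                      for d in range(3)])
    return beads

# Two atoms of mass 2 and 1 collapse onto one bead at their centre of mass:
bead = coarse_grain([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]], [2.0, 1.0], [[0, 1]])
```

Because the map is linear, the same matrix M can be applied frame by frame to a whole trajectory, which is how consistency conditions such as those of Noid et al. are stated.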

In general, Noid et al.[2] suggest five conditions to ensure that a CG model is consistent with the all-atom (AA) model:

1. The relationship between AA coordinates and CG coordinates is linear, i.e., it follows equation (1).
2. For each CG bead, there is at least one AA atom that belongs to that CG bead only.
3. The forces in the CG simulation satisfy the mean forces of the AA force field[2]:

\mathbf{F}_I^{CG}(\mathbf{R}) = \left\langle \mathbf{f}_I(\mathbf{r}) \right\rangle_{\mathbf{R}}    (2)

where the right-hand side is the average of the atomistic forces on the atoms of site I, conditioned on the CG configuration R.

4. No atom is involved in the definition of more than one CG site.
5. The CG mass is the weighted harmonic mean of the corresponding atomic masses[2]:

M_I = \left( \sum_{i \in I} \frac{c_{Ii}^2}{m_i} \right)^{-1}    (3)

where the c_{Ii} are the mapping coefficients of equation (1).

Common Coarse-Grain Model


There are two approaches to creating a CG model. The first, named "bottom-up," starts from the AA structure or a united-atom (UA) structure (a structure with all hydrogens implicitly attached to their heavy atoms). The CG model substitutes groups of atoms with predefined CG beads so that the new model recreates the structure and the interaction energy of the original model. The "top-down" approach, on the other hand, starts from the structures of macromolecules obtained from experimental data. One can define the mapping and the interactions between beads so as to stabilize these empirical structures.

Several well-known CG models appear in the literature, including:

Elastic network model: each amino acid is represented by one bead. The interactions between beads are simple, with the aim of recreating the native structure of the protein obtained from experiment.
Gō-like model: similar to the elastic network model, the Gō-like model uses one bead per amino acid. However, many non-bonded interactions are calculated and parameterized in this model, making it able to describe protein folding.[3]
Direct Boltzmann inversion: given the bonded and non-bonded potential forms of molecular dynamics, this method optimizes the parameters to mimic the atomistic distributions. In other words, the method transfers the atomistic pair potential of mean force of the AA model to the CG model.

The MARTINI Model


MARTINI is a CG model introduced by Marrink and Tieleman[4]. The model aims to provide "a broader range of applications without the need to reparametrize the model each time." MARTINI is used to simulate lipid systems, peptide-membrane binding, and protein-protein recognition.

A bead in MARTINI usually contains four heavy atoms (excluding hydrogen). Unlike other CG models, MARTINI contains only a few CG bead types, based on the charge and polarizability of the beads rather than on their atomic compositions. In the latest version, the model has four major interaction sites: P for polar, N for non-polar, C for apolar, and Q for charged. These types are expanded into 18 subtypes to describe detailed interactions.

See also
Wikipedia:Force_field_(chemistry)#Coarse-grained_force_fields
Wikipedia:MARTINI

References
1. Tuckerman, M. E. Monte Carlo.Statistical mechanics: Theory and molecular simulation; Oxford University Press Inc.: New York,
2010; pp. 300-304.
2. Noid, W.G.; Chu,Jhih-Wei; Ayton,Gary S.; Krishna,Vinod; Izvekov,Sergei; Voth,Gregory A.; Das,Avisek; Andersen,Hans C.
(2008). "The multiscale coarse-graining method. I. A rigorous bridge between atomistic and coarse-grained models". The
Journal of Chemical Physics 128 (24): 244144. doi:10.1063/1.2938860.
3. Tozzini, Valentina (April 2005). "Coarse-grained models for proteins". Current Opinion in Structural Biology 15 (2): 144-150.
doi:10.1016/j.sbi.2005.02.005.
4. Marrink, Siewert J.; de Vries, Alex H.; Mark, Alan E. (1 January 2004). "Coarse Grained Model for Semiquantitative Lipid
Simulations". The Journal of Physical Chemistry B 108 (2): 750–760. doi:10.1021/jp036508g.
Polarizable force fields
Polarizability
When a molecule is placed in different environments, its charge distribution changes; this is what defines polarizability. Force fields play a key role in molecular dynamics: the more accurately a force field describes physical reality, the more precise the output of the simulation will be. It is therefore crucial to have a good approximation for the magnitude of each partial charge and for its position at the nucleus. Non-polarizable (fixed-charge) models offer effective potentials with approximations that cannot fully capture many-body effects; in other words, fixed-charge models offer tractable descriptions that are fit for the equilibrium properties of homogeneous systems. In contrast, because polarizable empirical force fields include many-body effects, they provide a clearer and more systematic development of the functional form.

Drude Model (also known as Charges on Springs)


The Drude polarizable force field approximates the effect of induced polarization by adding negatively charged "Drude" particles to the system. These particles are tethered to the non-hydrogen atoms of the system with a harmonic potential. Electrostatic interactions with other charged particles cause the Drude particles to move in response, mimicking the effect of induced electron polarization. In this model, a Lennard-Jones potential is still used to describe dispersion interactions. Positive features of the Drude model are that it is relatively easy to implement within existing force fields and that it is chemically intuitive. Its most serious weakness is that the many extra charges increase the number of interaction calculations. The atomic polarizability, \alpha, of a given atom can be obtained from equation (1), where q_D is the charge on a Drude particle and k_D is the force constant of the spring that connects the Drude particle to its parent nucleus:

\alpha = \frac{q_D^2}{k_D}    (1)
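Equation (1) also fixes the induced dipole: at mechanical equilibrium the spring force k_D d balances the electrostatic force q_D E, so the Drude displacement is d = q_D E / k_D and the induced dipole q_D d equals alpha E. A minimal sketch (hypothetical function names, consistent reduced units):

```python
def drude_polarizability(q_drude, k_spring):
    """Atomic polarizability of a Drude oscillator: alpha = q_D^2 / k_D."""
    return q_drude ** 2 / k_spring

def induced_dipole(q_drude, k_spring, field):
    """Displacement where the spring force balances the electric force
    (k_D * d = q_D * E); the induced dipole q_D * d then equals alpha * E."""
    d = q_drude * field / k_spring
    return q_drude * d

alpha = drude_polarizability(2.0, 8.0)   # 0.5 in these units
mu = induced_dipole(2.0, 8.0, 3.0)       # equals alpha * E = 1.5
```

This is why the Drude charge and spring constant are not independent parameters: only their combination q_D^2 / k_D is constrained by the target polarizability.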

AMOEBA Model (Atomic Multipole Optimized Energetics for


Biomolecular Applications)
AMOEBA is one of several force fields able to describe electronic polarization. AMOEBA treats electrostatics using a combination of fixed and polarizable multipoles. In AMOEBA, polarization is modeled by interactive atomic induced dipoles. The induced dipole at each atom is the product of its atomic polarizability and the electrostatic field at that atom produced by the permanent multipoles and the induced dipoles of all other atoms. As a consequence, the calculation of electrostatic interactions differs from fixed-charge force fields, which use a Coulomb potential to evaluate only charge-charge interactions: AMOEBA adds extra terms to the force field to account for the charge-dipole and dipole-dipole interactions. One advantage of the AMOEBA model is its emphasis on reproducing molecular polarizabilities and electrostatic potentials, instead of just interaction energies.

Difference between AMOEBA and fixed-charge force fields:

The AMOEBA potential has static atomic charges, point dipole moments, and point quadrupole moments centered on each atom. A polarizable point-dipole moment is also included on each atomic center. The degree of polarizability is defined by an atomic polarizability parameter. The most stable directions and magnitudes of these induced dipoles are calculated iteratively.

[Figure: Schematic of the SWM4-NDP Drude polarizable water molecule. The Drude particle and its harmonic tether are shown in red. The lone-pair particle (LP) is on the H-O-H bisector]
References
Baker, Christopher M. (2015). "Polarizable Force Fields for Molecular Dynamics Simulations of Biomolecules". WIREs Computational Molecular Science. https://onlinelibrary.wiley.com/doi/abs/10.1002/wcms.1215.

"Current Status of the AMOEBA Polarizable Force Field" (2010). The Journal of Physical Chemistry B 114 (8): 2549–2564. doi:10.1021/jp910674d. https://pubs.acs.org/doi/10.1021/jp910674d.

[Figure: A SWM4-NDP Drude polarizable model water molecule is polarized by the electric field created by a neighbouring potassium ion. The Drude particle (red) is negatively charged and moves to maximize its interaction with the potassium]
Radial Distribution Functions
The radial distribution function (RDF), denoted in equations by g(r), defines the probability of finding a particle at a distance r from another tagged particle. The RDF depends strongly on the type of matter, so it varies greatly between solids, liquids, and gases. The average density at any point in a liquid is referred to as the bulk density, ρ; this density is always the same for a given liquid. The density of the liquid at a given distance r from another molecule is referred to as the local density, ρ(r), and depends on the structure of the liquid.

The radial distribution function can be evaluated as follows:

computing g(r)

g(r) = \frac{\left\langle N(r, \Delta r) \right\rangle}{\frac{4}{3}\pi\left[(r+\Delta r)^3 - r^3\right]\rho}

where N(r, \Delta r) is the number of particles within a spherical shell of inner radius r and thickness \Delta r.

[Figure: The radial distribution function (rdf) defines the probability of finding a particle at distance r from another tagged particle. Here, the distance r is between the oxygen atoms of two water molecules]
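The formula above translates directly into a histogram over pair distances. The sketch below (hypothetical function names) computes g(r) for particles in a cubic periodic box, using the minimum-image convention and normalising each shell count by its ideal-gas expectation.

```python
import math
import random

def radial_distribution(coords, box, dr, r_max):
    """g(r) for a cubic periodic box: histogram minimum-image pair
    distances, then divide each shell count by the ideal-gas expectation
    rho * 4/3 pi [(r + dr)^3 - r^3]."""
    n = len(coords)
    rho = n / box ** 3
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for d in range(3):
                dx = coords[i][d] - coords[j][d]
                dx -= box * round(dx / box)   # minimum-image convention
                d2 += dx * dx
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2        # pair counted from both ends
    shell = lambda b: 4.0 / 3.0 * math.pi * (((b + 1) * dr) ** 3 - (b * dr) ** 3)
    return [hist[b] / (n * rho * shell(b)) for b in range(nbins)]

# For an ideal gas (uncorrelated random points), g(r) ~ 1 at every distance:
rng = random.Random(0)
pts = [[rng.uniform(0.0, 10.0) for _ in range(3)] for _ in range(200)]
g = radial_distribution(pts, box=10.0, dr=0.5, r_max=3.0)
```

Note that r_max must stay below half the box length for the minimum-image convention to be valid; production codes also average the histogram over many trajectory frames to reduce noise.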

The correlation function, g(r), relates the bulk density to the local density. The local density can be calculated as follows:

Calculating local density

\rho(r) = \rho\, g(r)

Radial Distribution Function: Solids

[Figure: The radial distribution functions of solid (T = 50 K), liquid (T = 80 K), and gaseous argon (T = 300 K). The radii are given in reduced units of the molecular diameter]

Solids have regular, periodic structures, with molecules fluctuating near their lattice positions. The structure is specific over a long range, so defects in solids are rare. Discrete peaks at regular multiples of the molecular diameter σ can be seen in the RDF of a solid. Each peak has a broadened shape caused by the particles vibrating around their lattice sites. There is zero probability of finding a particle in the regions between these peaks, as all molecules are packed regularly to fill the space most efficiently. Each peak represents a coordination shell of the solid, with the nearest neighbours found in the first coordination shell, the second-nearest neighbours in the second, and so on.

Radial Distribution Function: Gases

Gases do not have a regular structure, which heavily influences their RDF. The RDF of a real gas has only a single coordination sphere, which rapidly decays to the bulk density of the gas, g(r) = 1. The RDF of a real gas is fairly simple, with values of g(r) as follows:

g(r) = 0 when r < σ, where the particles would overlap,
g(r) > 1 in the region where London dispersion forces are strongest,
g(r) = 1 when r is large.

[Figure: The radial distribution function of solid argon at 50 K calculated using a molecular dynamics simulation. The argon-argon interactions are described using a Lennard-Jones potential]

Radial Distribution Function: Liquids


Liquids follow the hard-sphere model of repulsion, meaning there is zero density where atoms would overlap. Because their molecules move dynamically, liquids do not maintain a constant structure and lose all of their long-range order. The first coordination sphere of a liquid occurs at ~σ. At large values of r, the molecules become independent of each other and the distribution returns to the bulk density (g(r) = 1). The first peak is the sharpest and indicates the first coordination sphere of the liquid. Subsequent peaks occur at roughly intervals of σ but are much smaller than the first; liquids are more loosely packed than solids and therefore do not show exact intervals. Although multiple coordination spheres are possible, there is a depleted probability of finding particles just outside the first sphere, due to repulsion caused by the first sphere.

Coordination Numbers
The coordination number indicates how many molecules are found within each coordination sphere. Integrating g(r) in spherical coordinates up to the first minimum of the RDF gives the coordination number of a molecule:

Calculating the coordination number

n = 4\pi\rho \int_0^{r_{min}} g(r)\, r^2\, dr
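The integral can be evaluated numerically from a tabulated g(r). The sketch below (hypothetical function names) applies the trapezoidal rule to 4πρ g(r) r² up to the first minimum r_cut.

```python
import math

def coordination_number(r, g, rho, r_cut):
    """Trapezoidal integration of 4 pi rho g(r) r^2 from 0 up to r_cut."""
    total = 0.0
    for i in range(1, len(r)):
        if r[i] > r_cut:
            break
        f0 = 4.0 * math.pi * rho * g[i - 1] * r[i - 1] ** 2
        f1 = 4.0 * math.pi * rho * g[i] * r[i] ** 2
        total += 0.5 * (f0 + f1) * (r[i] - r[i - 1])
    return total

# Sanity check against an ideal gas, where g(r) = 1 everywhere and the
# integral reduces to rho * 4/3 pi r_cut^3:
r = [0.01 * i for i in range(301)]
g = [1.0] * len(r)
n_coord = coordination_number(r, g, rho=0.5, r_cut=2.0)
```

In practice the cutoff r_cut is read off as the first minimum of the simulated g(r), so the result counts only the first coordination sphere.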


A coordination number of 12 reflects the optimal packing of hard spheres. Because attractive forces are weak and isotropic while repulsive forces are very strong and short-ranged, many molecules can be approximated as "hard spheres". This means that simple liquids (liquids that do not experience hydrogen bonding or strong electrostatic forces) pack in the most efficient way while avoiding repulsive interactions. The most efficient way for a fluid of spherical particles to fill a volume is for every molecule to have 12 neighbours. This explains why many simple liquids, such as argon, have a coordination number of 12.

Liquids with hydrogen bonding and electrostatic interactions, such as water, have a much lower coordination number (water has a coordination number of 4-5 in the first sphere). This is because these more complex liquids maximize their hydrogen-bonding interactions in the first sphere, which results in more energetic but less efficient packing. The RDF of these liquids has a notably sharper peak for the first coordination sphere, which occurs at a shorter distance than in a simple liquid.

[Figure: The radial distribution function of liquid argon, from a molecular dynamics simulation of liquid argon at 91.8 K and 1.8 atm pressure under periodic boundary conditions]

See Also
Wikipedia:Radial distribution function
SklogWiki: Radial distribution function (http://www.sklogwiki.org/SklogWiki/index.php/Radi
al_distribution_function)

References
Chandler, David. Introduction to Modern Statistical Mechanics. Oxford University Press. ISBN 9780195042771.

[Figure: A molecular dynamics simulation of liquid water at 298 K]

[Figure: The radial distribution functions of argon and water, calculated from molecular simulation. The argon rdf shows regular and definite coordination spheres. The water rdf is dominated by the first coordination sphere. Secondary coordination spheres are much less ordered than in simple liquids]

[Figure: The first coordination sphere of water indicated on the O-O rdf]

[Figure: The "kissing number" problem in 3 dimensions is the number of non-overlapping spheres that can be in contact with a central sphere. This simple model correctly predicts that the coordination number of a molecule of a simple fluid should be 12]

Dielectric constants
The dielectric constant of a liquid can be calculated from the magnitude of the variance of the net dipole moment of a cubic simulation cell, M:

\epsilon = \epsilon_\infty + \frac{4\pi}{3Vk_BT}\left(\left\langle \mathbf{M}^2\right\rangle - \left\langle \mathbf{M}\right\rangle^2\right)

where V is the volume of the simulation cell and \epsilon_\infty is the infinite-frequency dielectric constant (this form assumes conducting boundary conditions and Gaussian units). For non-polarizable models, \epsilon_\infty = 1.
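The fluctuation formula can be evaluated from a time series of the cell dipole moment. A minimal sketch (hypothetical function names, Gaussian-style units so the prefactor is 4π/3Vk_BT):

```python
import math

def dielectric_constant(M_samples, volume, kT, eps_inf=1.0):
    """Dielectric constant from cell-dipole fluctuations:
    eps = eps_inf + 4 pi (<M^2> - <M>^2) / (3 V kB T)."""
    n = len(M_samples)
    mean = [sum(m[d] for m in M_samples) / n for d in range(3)]
    mean_sq = sum(sum(c * c for c in m) for m in M_samples) / n
    variance = mean_sq - sum(c * c for c in mean)
    return eps_inf + 4.0 * math.pi * variance / (3.0 * volume * kT)

# Two dipole samples of +/-1 along x give <M> = 0 and <M^2> = 1,
# so eps = eps_inf + 4 pi / (3 V kT):
eps = dielectric_constant([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]],
                          volume=1.0, kT=1.0)
```

Because the dipole fluctuations of a polar liquid decorrelate slowly, converging this average typically requires much longer trajectories than structural properties such as g(r).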
Diffusion Coefficients
Einstein Equation
The diffusion coefficient of a liquid can be calculated from the long-time slope of the mean square displacement of a particle:

D = \lim_{t\to\infty}\frac{\left\langle \left|\mathbf{r}(t)-\mathbf{r}(0)\right|^2\right\rangle}{6t}

Green-Kubo Theory
The diffusion coefficient of a liquid can also be calculated from the velocity autocorrelation function of an MD simulation in the canonical ensemble,

D = \frac{1}{3}\int_0^\infty \left\langle \mathbf{v}(0)\cdot\mathbf{v}(t)\right\rangle dt

where v(t) is the velocity of the particle at time t. Note that care must be taken when using an MD simulation with a stochastic thermostat, which introduces an external source of friction and random force.
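In practice the Einstein relation is applied by fitting the long-time slope of the mean square displacement. A minimal sketch (hypothetical function names) using a zero-intercept least-squares fit:

```python
def diffusion_coefficient(times, msd):
    """Einstein relation in 3D: MSD(t) = 6 D t, so D is the zero-intercept
    least-squares slope of MSD versus t, divided by 6."""
    num = sum(t * m for t, m in zip(times, msd))
    den = sum(t * t for t in times)
    return num / den / 6.0

# A trajectory whose MSD is exactly 6 * 0.3 * t recovers D = 0.3:
t = [0.1 * i for i in range(1, 101)]
msd = [6.0 * 0.3 * ti for ti in t]
D = diffusion_coefficient(t, msd)
```

Real MSD curves are ballistic (quadratic in t) at short times, so the fit is normally restricted to the linear diffusive regime rather than the whole curve.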

See Also
Wikipedia:Einstein relation (kinetic theory)
Wikipedia:Diffusion

[Figure: The mean square displacement as a function of time of an oxygen molecule diffusing in water and in pentane. The plots were calculated from a molecular dynamics simulation of a periodic simulation cell containing the oxygen molecule and the solvent. These plots can be used to calculate the diffusion coefficient of the solute using the Einstein relation]

[Figure: A molecular dynamics simulation of liquid argon in a periodic simulation cell where the particle positions are not "wrapped" to the opposing face of the cell when they cross the boundary. This is used to calculate diffusion coefficients using the Einstein equation]
Interfacial properties
Surface Tension

The surface tension (γ) of a liquid is defined as

\gamma = \left(\frac{\partial G}{\partial A}\right)_{T,P}

where G is the Gibbs energy and A is the surface area of the liquid. At equilibrium, the surface tension of a liquid is positive, indicating that the Gibbs energy of the system would increase if the surface area were increased. The SI unit for surface tension is N/m, but dyn/cm is sometimes reported instead.

Physically, this reflects that molecules at the liquid-vapor interface have fewer neighbors than molecules in the bulk liquid, which generally means that the intermolecular interactions they engage in are weaker. Increasing the surface area would place more molecules at the interface, so this is thermodynamically disfavored; the surface tension indicates the degree to which it is disfavored. This effect is particularly strong in associated liquids like water, which becomes more ordered and has fewer hydrogen bonds at the water-vapor interface.

The surface tension of a liquid can be calculated in a molecular simulation from the differences in the averages of the pressure tensor components in an NVT simulation in which the cell contains a slab of the liquid surrounded by vapor layers along the z axis:

\gamma = \frac{L_z}{2}\left[\left\langle P_{zz}\right\rangle - \frac{\left\langle P_{xx}\right\rangle + \left\langle P_{yy}\right\rangle}{2}\right]

where L_z is the length of the cell along z and the factor of 1/2 accounts for the two liquid-vapor interfaces of the slab.

Surfactants

See Also
Wikipedia:Surface_tension
Transport properties
Viscosity
The viscosity of a liquid is defined by the drag force experienced by a moving plate separated from a stationary plate by a layer of the liquid:

F = \eta \frac{A u}{h}

where u is the speed of the moving plate, h is the distance by which the two plates are separated, and A is the surface area of the moving plate.

The viscosity \eta can be calculated from a molecular dynamics simulation using the Green–Kubo relation,

\eta = \frac{V}{k_BT}\int_0^\infty \left\langle P_{xy}(0)\, P_{xy}(t)\right\rangle dt

where P_{xy} is the xy component of the pressure tensor; the xz or yz off-diagonal components can be used instead.
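The Green–Kubo integral can be estimated from a time series of P_xy. The sketch below (hypothetical function names) computes the autocorrelation function and integrates it with the trapezoidal rule; in a real calculation the series would come from an equilibrium MD run, and the three off-diagonal components would be averaged.

```python
def green_kubo_viscosity(pxy, dt, volume, kT):
    """Shear viscosity eta = (V / kB T) * integral of <Pxy(0) Pxy(t)> dt,
    with the autocorrelation estimated over lags up to half the series."""
    n = len(pxy)
    nlag = n // 2
    acf = [sum(pxy[i] * pxy[i + lag] for i in range(n - lag)) / (n - lag)
           for lag in range(nlag)]
    integral = dt * (0.5 * acf[0] + sum(acf[1:-1]) + 0.5 * acf[-1])  # trapezoid
    return volume * integral / kT

# For a constant Pxy the ACF is 1 at every lag, so with 5 lags and dt = 1
# the trapezoidal integral is 4 and eta = V * 4 / kT:
eta = green_kubo_viscosity([1.0] * 10, dt=1.0, volume=2.0, kT=1.0)
```

The truncation of the integral matters: the ACF tail is noisy, so the upper limit is usually chosen where the correlation has clearly decayed rather than at the end of the available data.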

Thermal Conductivity
The thermal conductivity of a liquid is defined by the rate of heat transfer between an upper and a lower plate held at different temperatures (T1 and T2); the heat flux is proportional to the temperature gradient between the plates (Fourier's law).

The Green–Kubo relation for thermal conductivity relates \lambda to the autocorrelation function of the heat current \mathbf{J}:

\lambda = \frac{1}{3Vk_BT^2}\int_0^\infty \left\langle \mathbf{J}(0)\cdot\mathbf{J}(t)\right\rangle dt

See Also
Wikipedia:Viscosity
Wikipedia:Green–Kubo_relations
Solids
The Structure and Dynamics of Simple Solids
Solids have regular periodic structures in which the atoms are held at fixed lattice points in a 3-D rigid framework. This regular repeating structure can take different forms and is usually represented by a unit cell; the entire solid structure consists of repeated translations of this cell. Since the atoms cannot move freely, solids have a fixed volume and shape, which is distorted only by an applied force. However, atoms in solids do still have motion: they vibrate around their lattice positions, so their positions fluctuate slightly around the lattice points. This can be thought of as each atom being tethered to its point while still able to vibrate around it. The intermolecular interactions between the atoms keep them in their fixed positions. These forces depend on the composition of the solid and can include London dispersion, dipole-dipole, quadrupole-quadrupole, and hydrogen-bonding interactions.[1] The temperatures at which these solids occur are low enough that the atoms do not have enough energy to overcome these forces and move away from their fixed positions. This keeps the solid tightly packed and eliminates most of the space between atoms or molecules, giving solids their characteristic density. Figure 1 shows a molecular dynamics simulation of solid argon at 50 K, where the atoms are all vibrating around their fixed lattice points. Argon atoms interact only through London dispersion forces, which are described by the Lennard-Jones potential. The Lennard-Jones equation below accounts for the attractive London dispersion forces and the repulsive Pauli exclusion forces between atoms:

V(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]

[Figure 1: A molecular dynamics simulation of solid argon at 50 K. Argon-argon interactions are described using a Lennard-Jones potential. The atoms are arranged in a face-centered cubic cell]

The intermolecular distance between the argon atoms is equal to r_{min} = 2^{1/6}\sigma, the distance at the minimum of this potential energy function. This maximizes the intermolecular attraction by giving the most negative potential energy. The atoms in solid argon are held together by these forces of attraction and are tightly packed to minimize empty space.

Differences in the Radial Distribution Function

[Figure 2: The radial distribution functions of solid (T = 50 K), liquid (T = 80 K), and gaseous argon (T = 300 K). The radii are given in reduced units of the molecular diameter]

The radial distribution function g(r) relates the bulk density \rho of a solid, liquid, or gas to the local density \rho(r) at a distance r from a given molecule or atom. The equation that relates these parameters is:[2]

\rho(r) = \rho\, g(r)

The radial distribution functions of solid, liquid, and gaseous argon can be seen in Figure 2. In a solid, particles are found at defined positions, which is shown by the discrete peaks at regular multiples of the molecular diameter. The peaks of this radial distribution function are also broadened because the molecules fluctuate around their lattice positions and occupy slightly different positions within this range. The regions of the function with g(r) = 0 are regions where there is zero probability of finding another molecule or atom. The probability is zero between the peaks of a solid's radial distribution function because of the regular structure, in which all of the molecules are packed tightly to fill the space most efficiently; this leaves regular intervals where no atoms or molecules are present. Also, each peak in a radial distribution function represents a coordination sphere, where there is a high probability of finding molecules.[3] Each subsequent peak represents a coordination sphere that is farther from the origin molecule, so the nearest neighbours are in the first coordination sphere. It is also important to note that g(r) = 0 when r < \sigma: in this region the electron density clouds of the two atoms would overlap, making the potential energy prohibitively high.

In contrast, the radial distribution function of a gas has only one peak/coordination sphere, which then decays to the bulk density, represented by $g(r) = 1$. This simple radial distribution function is a consequence of a density so low that only the interactions of individual pairs of gas molecules affect it. The density is higher around the origin molecule because of the London dispersion attraction at short range, but this attraction decays quickly with distance. The radial distribution function of a liquid also differs from that of a solid. Molecules in a liquid are able to move around, but their positions are still correlated by the intermolecular forces between them. This gives liquids periodic peaks in the radial distribution function, as shown in Figure 2. Thus, liquids also have coordination spheres: molecules are more likely to be found at these distances from an origin molecule, so the local density is greater at these positions. However, the density is still lower than in a solid because of the fluidity of liquids and the ability of the molecules to change positions. The radial distribution function of a liquid has its peaks at intervals of roughly $\sigma$ (near $r = \sigma$, $2\sigma$, $3\sigma$, ...), which reflects the looser packing of the molecules in a liquid compared to a solid.[2] This looser packing arises because the coordination spheres are not bound to fixed positions. There is also a lower probability of finding molecules in the second coordination sphere due to Pauli repulsion interactions with the first sphere. Because of the disordered nature of liquids, the radial distribution function eventually decays to one, returning to the bulk density at large values of $r$ as the positions are no longer correlated with each other.

Simple liquids, such as liquid argon, are packed efficiently to avoid repulsive interactions between the atoms, but there is still some space between them. Solids are packed very tightly so that the empty space between atoms is as small as possible and most crevices are filled. Their fixed positions allow them to maintain this tight packing and minimize wasted space. Liquids also tend to minimize this space, but are less tightly packed than solids because their molecules can move around and change positions; they have more energy to overcome the intermolecular forces correlating their positions. This difference in packing is seen in the radial distribution functions: the solid's peaks occur at closer intervals than the liquid's.
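In practice, $g(r)$ is estimated from simulation coordinates by binning pair distances and normalizing each bin by the volume of its spherical shell and the bulk density. The following sketch illustrates this (the function name, bin settings, and the random stand-in coordinates are illustrative, not from this text; uncorrelated random positions should give $g(r) \approx 1$ everywhere):

```python
import math, random

def radial_distribution(coords, box, r_max, n_bins):
    """Estimate g(r) for particles in a cubic periodic box of side `box`.

    g(r) = rho(r) / rho: pair counts per spherical shell, divided by the
    count an ideal gas at the bulk density would give in the same shell.
    """
    n = len(coords)
    dr = r_max / n_bins
    hist = [0] * n_bins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for k in range(3):
                d = coords[i][k] - coords[j][k]
                d -= box * round(d / box)   # minimum-image convention
                d2 += d * d
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2      # count the pair for both atoms
    rho = n / box ** 3                      # bulk number density
    g = []
    for b in range(n_bins):
        shell = 4.0 / 3.0 * math.pi * (((b + 1) * dr) ** 3 - (b * dr) ** 3)
        g.append(hist[b] / (n * rho * shell))
    return g

# Stand-in data: uncorrelated random positions, so g(r) fluctuates about 1.
random.seed(0)
box = 10.0
coords = [[random.uniform(0, box) for _ in range(3)] for _ in range(400)]
g = radial_distribution(coords, box, r_max=box / 2, n_bins=25)
print(g[10])   # close to 1 for uncorrelated particles
```

Applied to the argon trajectories discussed above, the same procedure would instead show the sharp peaks of the solid and the broad, decaying oscillations of the liquid.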
References
1. Basic Chemistry/States of Matter. Wikiversity, 2017. https://en.wikiversity.org/wiki/Basic_Chemistry/States_of_matter (accessed November 10, 2017).
2. Rowley, C. Chemistry 4305: Radial Distribution Functions. Memorial University of Newfoundland, St. John's, NL (accessed November 10, 2017).
3. Liquids and Solutions: 2nd Year Michaelmas Term: Introduction. http://rkt.chem.ox.ac.uk/lectures/liqsolns/liquids.html (accessed November 10, 2017).
Definition of variables

Variable          Quantity
$\mathcal{V}$     potential energy
$K$               kinetic energy
$H$               enthalpy
$U$               internal energy
$G$               Gibbs energy
$A$               Helmholtz energy
$\mu$             dipole moment
$\Theta$          quadrupole moment
$q$               charge
Molecular Dynamics Simulations Using NAMD
A NAMD simulation is initiated by running the program namd2 with the input file as its argument.
namd2 input.conf > output.log
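The input file is a plain-text configuration file of NAMD keyword-value pairs (Tcl syntax). A minimal illustrative sketch is shown below; the file names and parameter values are placeholders, not taken from this text:

```
# input.conf -- minimal illustrative NAMD configuration (placeholder values)
structure          system.psf     ;# PSF topology file
coordinates        system.pdb     ;# initial coordinates
paraTypeCharmm     on             ;# parameters are in CHARMM format
parameters         par_all36.prm  ;# force field parameter file
temperature        50             ;# initial temperature (K)
cutoff             12.0           ;# nonbonded cutoff (Angstrom)
timestep           2.0            ;# integration timestep (fs)
outputName         output         ;# prefix for output files
run                10000          ;# number of MD steps
```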

This page was last edited on 28 January 2022, at 09:46.
