
Rossen Radev

Atomic Physics Department


University of Sofia, Bulgaria

Parallel Monte Carlo simulation of critical behaviour of finite systems

The interest in phase transitions in finite systems is two-fold. Firstly, bulk properties of the
material can be simulated if the system is studied under periodic boundary conditions [1] and a
proper account of the rounding and shifting of the measurable quantities is taken in the analysis.
Secondly, free finite systems, such as molecular clusters, are of interest due to their differences
from bulk systems [2].

1. Theoretical background
In this study we use a Monte Carlo (MC) approach to investigate critical behaviour in clusters
of TeF6 molecules. TeF6 provides a good example for studying both the nature and the
temperature dependence of the structural transformations found in finite molecular clusters [3].
These are important for the creation and design of nanomaterials and nanodevices.

The observations [4] show a body-centered cubic (bcc) structure below 233 K in the bulk and,
likewise, below the melting region for clusters. Below 100 K a monoclinic or not well defined
structure can be observed, depending on the thermal history of the finite cluster. This lack of
certainty makes the simulation of the thermal behaviour greatly important. Since the number of
constituents (atoms) in a molecular cluster is very high and the interactions between the atoms
can be represented as pairwise potentials, this computational task is a perfect physical example
for a parallel computational approach.

Our model system contains rigid octahedral TeF6 molecules, which are allowed to rotate and
translate. They interact through pairwise Lennard-Jones atom-atom potentials and an
atom-atom Coulomb term:

v(q) = \sum_{i<j} \sum_{\alpha,\beta} \left\{ 4\varepsilon_{\alpha\beta} \left[ \left( \frac{\sigma_{\alpha\beta}}{r_{i\alpha,j\beta}} \right)^{12} - \left( \frac{\sigma_{\alpha\beta}}{r_{i\alpha,j\beta}} \right)^{6} \right] + \frac{q_{i\alpha} q_{j\beta}}{r_{i\alpha,j\beta}} \right\}   [equ. 1]

r_{iα,jβ} - distance between atom α of molecule i and atom β of molecule j.
q_{iα} - partial charge of atom α of molecule i.
α, β - denote either a fluorine or a tellurium atom.
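As an illustration, the atom-atom pair term of equ. 1 can be coded directly. The sketch below is a minimal C version; the function name and the way the parameters are passed are illustrative choices, and the actual values of the Lennard-Jones parameters and partial charges for the F-F, F-Te and Te-Te pairs must be taken from the force field used in the study.

```c
#include <math.h>

/* Atom-atom pair term of equ. 1: Lennard-Jones plus Coulomb.
 * eps and sigma are the LJ parameters for the (alpha, beta) atom pair,
 * qa and qb the partial charges, r the atom-atom distance. */
double pair_energy(double eps, double sigma, double qa, double qb, double r)
{
    double sr6 = pow(sigma / r, 6.0);     /* (sigma/r)^6                 */
    return 4.0 * eps * (sr6 * sr6 - sr6)  /* LJ: repulsion - attraction  */
         + qa * qb / r;                   /* Coulomb term                */
}
```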

Computational study of phase transitions is a challenging task for both the Monte Carlo
(MC) and molecular dynamics (MD) techniques. The two methods give complementary
information about the system properties and the values of extensive parameters. An unsettled
problem is the ergodicity in small systems, which is why the MC and MD approaches should be
applied to the same system and the results compared. In a Monte Carlo
simulation we generate a Markov sequence of configurations and use it to obtain the statistical
(ensemble) averages. In a molecular dynamics simulation we generate a long trajectory over
which we calculate the time averages of the same quantities. Under the
ergodic hypothesis these two should be the same, <A>time = <A>MC. Strictly, ergodicity
is a property of an infinite system studied in the thermodynamic limit. Hence, in a small system
we might expect the two averages to differ. An important question is how these values
converge towards the same value as a function of the system size. Another question is how
different the results are as functions of the number of MC steps and of the simulated time in MD. Obviously,
the simulation time has to be long enough (compared to the characteristic time of the studied
system) and the number of MC steps sufficiently large to claim reliable results.
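For reference, a single Metropolis accept/reject decision has the following form. This is a minimal sketch in C; uniform01() is a placeholder for a draw from the random number generator discussed in Sec. 2.

```c
#include <math.h>
#include <stdlib.h>

/* Placeholder for a uniform draw in [0,1); in the real code this comes
 * from the parallel mt19937 generator described in Sec. 2. */
double uniform01(void)
{
    return rand() / (RAND_MAX + 1.0);
}

/* Standard Metropolis criterion: accept a trial move from energy e_old
 * to e_new at inverse temperature beta = 1/(kB*T). */
int metropolis_accept(double e_old, double e_new, double beta)
{
    if (e_new <= e_old)
        return 1;                                      /* downhill: always accept */
    return uniform01() < exp(-beta * (e_new - e_old)); /* uphill: Boltzmann test  */
}
```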

From previous MD calculations we found that, in the case of orientational structural phase
transitions, the phase space may be split into weakly connected submanifolds
[5]. The system can then be trapped in a metastable state with no chance of escaping from it. As a
result, in a single MC run the system is not able to explore the whole potential energy surface
available to it. The computed averages of the thermodynamic quantities are then biased and the
final conclusions can be wrong.

One way to overcome the problems of Metropolis sampling is to use jump-walking (J-walking) [6-7].
In its general form, the method generates trial moves from a higher-temperature (Tj)
equilibrium distribution with a probability specified by a parameter Pj; the remaining trial
moves are of conventional Metropolis character. In our study we use multi-temperature
generalizations [8] of the basic approach.
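A minimal sketch of the jump acceptance test follows, based on the criterion of refs. [6-7]: a trial configuration drawn from the equilibrium distribution at the jump temperature Tj is accepted at the lower temperature with probability min{1, exp[-(β - βj)(E_trial - E_current)]}. Function and variable names are illustrative.

```c
#include <math.h>

double uniform01(void);  /* as in the Metropolis sketch above */

/* Jump acceptance used in J-walking (after refs. [6-7]): a trial
 * configuration with energy e_trial, drawn from the equilibrium
 * distribution at the jump temperature Tj (inverse temperature beta_j),
 * replaces the current configuration with energy e_cur of the walker at
 * inverse temperature beta with probability
 * min{1, exp[-(beta - beta_j) * (e_trial - e_cur)]}. */
int jwalk_accept(double e_cur, double e_trial, double beta, double beta_j)
{
    double p = exp(-(beta - beta_j) * (e_trial - e_cur));
    return p >= 1.0 || uniform01() < p;
}
```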

During the TRACS visit we developed a parallel J-walking code that enables us to carry out
Monte Carlo simulations efficiently in a multi-processor computing environment. We applied
the program to the study of the thermodynamic properties of (TeF6)n, n = 59 clusters.

2. Parallelization
In practice, J-walking can be implemented in two ways. The first approach writes the
configurations from the simulation at the J-walking temperature to an external file and accesses
them at random from that file while carrying out the simulation at the lower
temperature; the random access is necessary to avoid correlation errors [9].
The large storage requirements have limited the application of this variant to small
systems. The second approach uses tandem walkers: one at a high temperature, where
Metropolis sampling is ergodic, and multiple walkers at lower temperatures.

The best features of these two approaches can be combined into a single J-walking algorithm
[10] with the use of multiple processors and the MPI communication library. We incorporate
MPI functions into the MC code to send and receive the configuration geometries and potential
energies of the clusters. Instead of generating the external distributions and storing them before the
actual simulation, we generate the required distributions during the simulation and pass them to the
lower-temperature walkers.

Parallel J-Walking algorithm:


1. For each temperature T, make Metropolis MC steps:
• Rotate each molecule.
• Translate each molecule.
• Accept or reject the trial move.
2. After S1 steps, collect statistics:
• Potential energy histogram.
• Energy average and deviation.
• Heat capacity Cv.
• Save the current configuration.
3. After S2 steps, make a jump-walking step by exchanging the configurations using MPI (see the sketch after this list).
4. Go to step 1.
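A minimal sketch of the configuration exchange in step 3 is given below, assuming a flat array of Cartesian coordinates and point-to-point MPI messages; the rank numbers, message tags and data layout are illustrative choices, not necessarily those of the original code.

```c
#include <mpi.h>

#define NCOORD (413 * 3)  /* 59 TeF6 molecules = 413 atoms, flat xyz layout */

/* One jump-walking exchange between a higher-temperature walker (rank hi)
 * and a lower-temperature walker (rank lo): the high-T side sends a stored
 * configuration and its potential energy; the low-T side receives them and
 * then applies the jump acceptance test (jwalk_accept() above). */
void jwalk_exchange(int rank, int lo, int hi,
                    double conf[NCOORD], double *energy, MPI_Comm comm)
{
    if (rank == hi) {
        /* conf/energy hold a configuration picked at random from the
         * stored array; selection and replacement are sketched below */
        MPI_Send(conf, NCOORD, MPI_DOUBLE, lo, 0, comm);
        MPI_Send(energy, 1, MPI_DOUBLE, lo, 1, comm);
    } else if (rank == lo) {
        MPI_Recv(conf, NCOORD, MPI_DOUBLE, hi, 0, comm, MPI_STATUS_IGNORE);
        MPI_Recv(energy, 1, MPI_DOUBLE, hi, 1, comm, MPI_STATUS_IGNORE);
    }
}
```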

The diagram of the model J-walking is shown in fig. 1. Each green square is a Metropolis MC
simulation at a particular temperature. The set of boxes on the right-hand side represents the
array of previous configurations of the system, which are stored in memory to avoid
correlations between the lower and higher temperatures.
At each trial jump we choose one of the 4 systems at random (fig. 2). When a configuration is
transmitted to a lower-temperature process, it is a configuration chosen at random from the array of the
higher-temperature walker. The current configuration of that walker then replaces the
configuration just passed from the array to the other temperature. Fig. 2 shows the parallel
decomposition of the computation: each process computes part of one of the 4 multi-stage J-walks
and exchanges configurations and energies with the others.
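The selection-and-replacement step described above can be sketched as follows; the array layout and function name are illustrative choices.

```c
#include <string.h>

#define ARRAY_SIZE 2500   /* stored configurations per walker (see below) */
#define NCOORD (413 * 3)  /* flat xyz layout, as in the exchange sketch   */

/* Pass a randomly chosen stored configuration down in temperature and put
 * the walker's current configuration in its place; idx is a random index
 * in [0, ARRAY_SIZE), drawn from the parallel generator of Sec. 2. */
void pick_and_replace(double store[ARRAY_SIZE][NCOORD], double store_e[ARRAY_SIZE],
                      const double cur[NCOORD], double cur_e,
                      double out[NCOORD], double *out_e, int idx)
{
    memcpy(out, store[idx], sizeof(double) * NCOORD);  /* configuration sent down */
    *out_e = store_e[idx];
    memcpy(store[idx], cur, sizeof(double) * NCOORD);  /* current one replaces it */
    store_e[idx] = cur_e;
}
```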

We use array sizes of 2500 configurations, which is limited by the size of the processors' RAM. The
arrays in the parallel method are small and do not inhibit applications of the method to large
systems. For the computations in the present work the number of MC passes for each
temperature is 6×10^5 for a cluster of 59 molecules (413 atoms). We attempt J-walking jumps
every 50 Metropolis MC steps during the thermalisation and every 150
steps during the main computation.
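The loop structure implied by the algorithm of Sec. 2 and by this jump schedule is sketched below; S1, S2 and the helper routines are illustrative names.

```c
/* Loop structure of one walker; s1 is the statistics interval and s2 the
 * jump interval (50 during thermalisation, 150 in the main computation). */
void mc_sweep(void);            /* rotate + translate each molecule, accept/reject  */
void collect_statistics(void);  /* energy histogram, <E>, Cv = (<E^2>-<E>^2)/(kB*T^2),
                                   save the current configuration                   */
void attempt_jump(void);        /* J-walking exchange via MPI                       */

void run_walker(long nsteps, long s1, long s2)
{
    for (long step = 1; step <= nsteps; step++) {
        mc_sweep();
        if (step % s1 == 0)
            collect_statistics();
        if (step % s2 == 0)
            attempt_jump();
    }
}
```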
To perform every MC step we need parallel streams of random numbers with good statistical
properties. For this purpose we use the 'Mersenne Twister' generator, mt19937 [11].

mt19937:
• Based on a Multiple Recursive Matrix Method (MRMM).
• Period 2^19937 - 1.
• 623-dimensionally equidistributed.
• Has passed many statistical tests.
• About 4 times faster than rand().

We use a splitting technique to parallelize the random generator (fig. 3); a sketch follows the list:


• First we get the rank of each process.
• Multiply it by some big number.
• Embed the result at some position in the state vector of the generator.
• Draw the first 1000 numbers from the generator to mix the state vector.
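A minimal sketch of this seeding scheme, assuming the API of Matsumoto and Nishimura's reference C implementation (init_genrand(), genrand_int32(), state vector mt[624]); note that mt[] is declared static in the reference source, so the sketch assumes it has been made visible to the seeding routine. The multiplier and the state-vector position are illustrative choices.

```c
#include <mpi.h>

#define BIG_MULT 2654435761UL  /* an arbitrary large odd constant (illustrative) */

extern unsigned long mt[624];        /* mt19937 state vector (made non-static) */
void init_genrand(unsigned long s);  /* reference-implementation API           */
unsigned long genrand_int32(void);

/* Seed the per-process random stream following the four steps above. */
void seed_parallel_mt(unsigned long base_seed, MPI_Comm comm)
{
    int rank;
    MPI_Comm_rank(comm, &rank);           /* 1. rank of this process            */
    init_genrand(base_seed);              /*    common starting state           */
    mt[17] ^= (unsigned long)rank * BIG_MULT;
                                          /* 2.-3. embed rank * big number at a
                                             state-vector position (17 is an
                                             illustrative choice)               */
    for (int i = 0; i < 1000; i++)        /* 4. draw 1000 numbers to mix        */
        (void)genrand_int32();
}
```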

5. Results.
The main outcomes of the TRACS visit can be summarized as follows:
A J-walking procedure for sampling the multi-dimensional phase space of clusters of 59 TeF6
molecules has been implemented to solve the problem arising from weakly connected local
minima; a code written in C has been developed.
The code has been parallelized with MPI to address the problems of ergodicity and
computation time; in the sequential code it is not possible to retrieve information about more
than one cluster simultaneously.
The MT19937 random number generator has been parallelized.
Using the parallel code we made a set of production runs for the 59-molecule cluster. The data
obtained from these runs will be analyzed at the home laboratory and the results will be published.

The program was ported, tested and optimized on SUN 3500 and CRAY T3E machines. Dynamic
memory management was implemented in the program for optimal usage of the memory.
We found that the memory and performance requirements make the CRAY T3E more suitable for such
computations. In our runs we used 64 processors, each with 64 MB RAM. Each run of 6×10^5
steps takes approximately 11 hours.

6. References.

[1] Monte Carlo Methods in Statistical Physics, edited by K. Binder (Springer, Berlin, 1979).
[2] A. Proykova and R. S. Berry, Z. Phys. D 40, 215 (1997).
[3] A. Proykova, R. Radev, Feng-Yin Li, and R. S. Berry, J. Chem. Phys. 110, 3887 (1999).
[4] L. S. Bartell, E. J. Valente, and J. C. Caillat, J. Phys. Chem. 91, 2498 (1987).
[5] R. Radev, A. Proykova, Feng-Yin Li, and R. S. Berry, J. Chem. Phys. 109, 3596 (1998).
[6] D. D. Frantz, D. L. Freeman, and J. D. Doll, J. Chem. Phys. 93, 2769 (1990).
[7] D. D. Frantz, D. L. Freeman, and J. D. Doll, J. Chem. Phys. 97, 5713 (1992).
[8] D. L. Freeman and J. D. Doll, Annu. Rev. Phys. Chem. 47, 43 (1996).
[9] D. D. Frantz, J. Chem. Phys. 102, 3747 (1995).
[10] A. Matro, D. L. Freeman, and R. Q. Topper, J. Chem. Phys. 104, 8690 (1996).
[11] M. Matsumoto and T. Nishimura, "Mersenne Twister: A 623-dimensionally equidistributed
uniform pseudorandom number generator", ACM Trans. Model. Comput. Simul. 8, 3-30 (1998).
