
A portable and flexible implementation of the Wang–Landau algorithm in order to determine the Density of States

Felipe Moreno [email protected] Sergio Davis [email protected] Joaquín Peralta [email protected] Departamento de Física, Facultad de Ciencias Exactas, Universidad Andrés Bello, Sazié 2212, Santiago, Chile. Comisión Chilena de Energía Nuclear, Casilla 188-D, Santiago 7600713, Chile.
Abstract

In this work we develop an implementation of the Wang–Landau algorithm [Phys. Rev. Lett. 86, 2050-2053 (2001)]. This algorithm allows us to find the density of states (DOS), a function that, for a given system, describes the proportion of states that have a certain energy. The implementation uses the Python and C++ languages for the algorithm itself, and it can take advantage of any library, such as the powerful LAMMPS library, for the computation of energy. Therefore, the resulting implementation is simple and flexible without sacrificing efficiency. This implementation also considers recent developments in the parallelization of the code for faster computation. We establish the soundness and effectiveness of our implementation by studying well-known systems such as the Ising model and the Lennard–Jones and EAM solids. We have found that our implementation can find the DOS with very good precision in a reasonable amount of time. Therefore, we are equipped with a very powerful and flexible implementation that can easily be used to study more realistic models of matter.

keywords:
Wang–Landau, Density of States, Simulation, Parallel Computing, Python
journal: Computer Physics Communications

PROGRAM SUMMARY

Program Title: Republica Wang–Landau (RWL)
CPC Library link to program files: (to be added by Technical Editor)
Code Ocean capsule: (to be added by Technical Editor)
Licensing provisions: GPLv3
Programming language: Python, C++
Nature of problem: An implementation of the WL algorithm that is flexible enough to be used for a large variety of systems.
Solution method: This implementation separates the actual Wang–Landau code from the abstract representation of the system. Therefore, any system can be attached as a walker—a Python class that represents the system.
Additional comments including restrictions and unusual features: As examples, basic systems such as the Ising model are included, plus a wrapper class for a LAMMPS walker that can be used with any system supported by LAMMPS.

1 Introduction

The density of states (DOS), denoted by $\Omega(E)$ and defined as $\Omega(E)=\int d\mathbf{r}\,d\mathbf{p}\,\delta(E-H(\mathbf{r},\mathbf{p}))$, where $H(\mathbf{r},\mathbf{p})$ is the Hamiltonian of the system, is a fundamental function of a system such that $\Omega(E)dE$ is the number of states that have an energy between $E$ and $E+dE$. This function can be used to compute all of the thermodynamic properties of a system. Examples of such properties are the caloric curve and the heat capacity, both being highly relevant to the study of a wide variety of systems. Therefore, computing the DOS is of fundamental importance, and consequently several methods for the calculation of this function for a given system have been developed [1, 2, 3], some of them based on generalized ensembles and reweighting techniques.

Among the generalized-ensemble methods, Wang–Landau (WL) is a well-known Monte Carlo technique for computing the DOS of a system [4]. The method was originally developed for discrete systems and with single processors in mind. Since then, however, it has been extended to off-lattice systems and parallel processing [5], in order to take full advantage of the powerful multicore processing units that are the standard today. Although the WL algorithm is widely used and has a large number of variants and improvements [6, 7, 8], there are still practical difficulties in its implementation, ranging from the fine-tuning needed for optimal operation to the lack of a universal implementation able to be “plugged into” traditional simulation codes.

In this work, we present a state-of-the-art implementation of WL using the flexibility and portability of the Python programming language [9], coupled with the efficiency of the C++ language. This implementation takes advantage of two well-known libraries: the powerful LAMMPS [10] library, which is used to compute the energy of a complex configuration of atoms, and the easy-to-use Python multiprocessing library for parallel computing [11]. Furthermore, this implementation supports both discrete and continuous systems, while supporting, or being ready to support, established techniques to improve the algorithm, such as the optimization of flatness criteria [12], the replica-exchange parallelization scheme [5], and more efficient Monte Carlo trial moves [13], among others. Some improvements to the algorithm are relatively easy to incorporate in the code, such as the WL-$1/t$ variant of Belardinelli et al. [12], which is already included in our code, or the Stochastic Approximation Monte Carlo (SAMC) of Liang et al. [14]. Here we present a well-modularized system that allows the scientific community to use and incorporate different aspects of the WL algorithm efficiently.

The applicability of the WL algorithm is quite broad, covering different research areas such as multidimensional integration [15], protein folding [16], thermodynamics of polymers [17], free energy profiles [18], and the study of spin-glass models [19, 20, 21], among others. Here we focus the technique mainly on the calculation of the caloric curve for different cases, to expand its use towards the condensed matter community. We tested this implementation using a variety of well-established benchmark systems such as the Ising model, the Potts model, classical and quantum harmonic oscillators, the Lennard–Jones model and the embedded atom model (EAM). The goal of these tests was to prove the soundness of our implementation through comparison with previously known results.

This paper is organized as follows. Following this introduction, in Section 2 we describe the inner workings of the Wang–Landau algorithm. In Section 3 we offer a brief description of this implementation. In Section 4 we provide details of the benchmark simulations performed. A discussion of our findings is given in Section 5.

2 A brief review of the Wang-Landau algorithm

The WL algorithm is used to find the DOS of a many-particle system with $N$ degrees of freedom. This system can be described by a configurational state $X=(X_{1},X_{2},\ldots,X_{N})$ and a function of $X$ that gives the system energy, $E=\mathcal{H}(X)$. This function is called the Hamiltonian.

In order to produce the DOS of a system, the Wang–Landau algorithm takes advantage of the following property. Suppose we sample a large number of possible states according to a distribution that depends only on the Hamiltonian of each state, $P(X)=\rho(\mathcal{H}(X))$. If this distribution is chosen to be

\rho(\mathcal{H}(X))\propto\frac{1}{\Omega(\mathcal{H}(X))}, (1)

then the energies will be distributed according to

P(E)=\rho(E)\Omega(E)=\text{constant}. (2)

Consequently, we would build up a flat histogram. We can use this property to test how good a candidate DOS is and, even better, to determine how to make a small change to it so that it is closer to the true DOS. We keep making those small changes to the candidate DOS until the histogram is flat enough. The family of algorithms that take advantage of this property is known as flat-histogram methods [22].

The specific steps to reproduce the above procedure for discrete systems are detailed now. At the start of our simulation we define a candidate DOS of the system, $\Omega(E)=1$ for all $E$. We define a walker as an entity that wanders over the configuration space, standing on a unique configuration at every simulation step. The energy of a walker is the energy of the configuration on which it is currently standing. We start out with a walker on a random state $X$, and then we produce a trial move from $X$ to $X^{\prime}$. The transition acceptance that is in concordance with Eq. 1 and the detailed balance condition is

W(X\to X^{\prime})=\min\left(1,\frac{\Omega(\mathcal{H}(X))}{\Omega(\mathcal{H}(X^{\prime}))}\right). (3)

If the trial move is accepted, the walker moves to the new state $X^{\prime}$; otherwise, it remains in its previous state $X$. Whether the change was accepted or not, we update the value of the DOS at the current energy $E$ according to $\Omega(E)\leftarrow\Omega(E)\cdot f$, where $f>1$. Because of the very large numbers involved, this update must be carried out in terms of logarithms, i.e., $\log\Omega(E)\leftarrow\log\Omega(E)+\log f$. Additionally, we update the histogram that counts the number of times each energy has been visited. This process continues until the histogram is sufficiently flat; the flatness criterion is that the minimum value of the histogram must be at least 80% of its mean value. Once this criterion has been met, the process is interrupted, the value of $f$ is decreased according to $f\leftarrow\sqrt{f}$, the histogram is reset, and the same process is started over again. Finally, the algorithm is stopped when $f$ reaches a value close enough to unity, such as $f\approx 1+10^{-8}$, and the final $\Omega(E)$ is considered to be the true DOS.
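The cycle just described can be sketched as a minimal serial loop. This is only an illustration of the steps above, not the package's actual API: the function and walker names are ours, and the walker interface anticipated here (energy(), change(), undo()) is described in Section 3.1.

```python
import math
import random

def wang_landau(walker, energies, ln_f_final=1e-8, flatness=0.8):
    """Minimal serial Wang-Landau loop for a discrete system (sketch).

    `walker` exposes energy()/change()/undo(); `energies` lists every
    reachable energy level.  All names here are illustrative.
    """
    ln_g = {E: 0.0 for E in energies}   # log Omega(E), the candidate DOS
    ln_f = 1.0                          # log of the modification factor f
    while ln_f > ln_f_final:            # stop near f ~ 1 + 1e-8
        hist = dict.fromkeys(energies, 0)
        steps = 0
        while True:
            E_old = walker.energy()
            walker.change()
            E_new = walker.energy()
            # Eq. (3) in log space: accept with prob min(1, g(E)/g(E'))
            if math.log(random.random()) >= ln_g[E_old] - ln_g[E_new]:
                walker.undo()
                E_new = E_old
            ln_g[E_new] += ln_f         # Omega(E) <- Omega(E) * f, in logs
            hist[E_new] += 1
            steps += 1
            # Flatness check every 1000 steps: min(H) >= 80% of mean(H)
            if steps % 1000 == 0:
                h = list(hist.values())
                if min(h) >= flatness * sum(h) / len(h):
                    break
        ln_f /= 2.0                     # f <- sqrt(f)
    return ln_g
```

Only differences of $\log\Omega$ matter for the acceptance, which is why the loop can start from $\log\Omega(E)=0$ for all energies.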

For continuous systems, the energies are discretized into small bins of width $\Delta E$. Each bin has its own $\Omega$ and histogram, which are updated when the energy of the walker lies between $E-\Delta E/2$ and $E+\Delta E/2$. The transition probability is computed after first interpolating the discrete $\Omega$, thus obtaining $\Omega(E)$ from the interpolated function. This step provides far better results than merely choosing the discrete $\Omega$ of the bin that contains $E$, because the walker will then prefer a transition to a less probable energy even if this new energy and the current one belong to the same bin. Furthermore, unlike the discrete case, where the way in which we produce a change is obvious, we must select an appropriate step size or, alternatively, use a variable step size through the definition of a target acceptance. A target acceptance of 30% has been used for the continuous systems.
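The variable step size can be driven by a simple feedback rule of the kind sketched below. This is our illustration of the idea, not necessarily the exact update rule used in the code; the function name and the damping parameter are assumptions.

```python
import math

def adapt_step(step, accepted, attempted, target=0.30, damping=0.1):
    """Nudge the trial-move step size toward a target acceptance ratio.

    If moves are accepted more often than the target, the step grows
    (bolder moves lower acceptance); if less often, it shrinks.
    Illustrative sketch only.
    """
    if attempted == 0:
        return step
    ratio = accepted / attempted
    return step * math.exp(damping * (ratio - target))
```

Called every few hundred trial moves, a controller like this keeps the acceptance ratio near the 30% target quoted above.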

Refer to caption
Figure 1: The entire energy range is split up into overlapping windows.

The algorithm is parallelized in the following way. Firstly, the entire energy range is split up into overlapping windows of energy (Fig. 1); an overlap of 75% has been reported to provide good results [5]. Secondly, every window is given its own walker and then a so-called “sweep” is executed—the performing of a certain number of simulation steps inside every window. These steps run in parallel and are completely independent. Thirdly, a replica exchange [5] is attempted. This procedure consists of the exchange of walkers between two windows. For this exchange to occur, the windows must be adjacent and their walkers must have energies that lie in the overlapping interval of their respective windows. If these conditions are met for windows $i$ and $j$, the exchange is performed according to the probability

P(w_{i}\leftrightarrow w_{j})=\min\left(1,\frac{\Omega_{i}(E_{i})\Omega_{j}(E_{j})}{\Omega_{i}(E_{j})\Omega_{j}(E_{i})}\right), (4)

where $w$, $E$ and $\Omega$ are the walker, the walker’s current energy and the current DOS of the respective windows. This procedure allows walkers to travel through the entire energy landscape, thus avoiding the presence of energy traps. Fourthly, we repeat the last two steps until every window has ended its own simulation. The windows that have already ended their simulations are required to continue running without updating their DOS, because we need those windows to keep performing replica exchanges with the active ones. Finally, we reconstruct the full DOS from the pieces obtained at every window, as follows.
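In log space, the acceptance of Eq. (4) can be evaluated as in the sketch below. The names are ours, and the caller is assumed to have already verified that both energies lie in the overlap of the two windows.

```python
import math
import random

def try_exchange(ln_g_i, ln_g_j, E_i, E_j, rng=random.random):
    """Accept or reject a walker exchange between windows i and j
    following Eq. (4), using log-DOS values to avoid overflow.

    ln_g_i and ln_g_j are callables returning log Omega of windows i
    and j.  Illustrative sketch, not the package's API.
    """
    ln_p = (ln_g_i(E_i) + ln_g_j(E_j)) - (ln_g_i(E_j) + ln_g_j(E_i))
    return math.log(rng()) < min(0.0, ln_p)
```

Because only ratios of $\Omega$ enter Eq. (4), the per-window normalization of each log-DOS piece is irrelevant here.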

In order to reconstruct $\Omega$, we must consider that we want to produce the smoothest curve possible; in particular, we want the microcanonical caloric curve $\beta(E)=\partial(\log\Omega(E))/\partial E$ to be smooth. With this in mind, we select the pieces of $\Omega$ that belong to the first ($\Omega_{1}$) and second ($\Omega_{2}$) windows, and then we join them through the following procedure. Firstly, we compute $\beta_{1}(E)=\partial(\log\Omega_{1}(E))/\partial E$ and $\beta_{2}(E)=\partial(\log\Omega_{2}(E))/\partial E$ and we select the energy $E^{*}$ where these two functions match best. Secondly, we define $\bar{\Omega}_{2}=[\Omega_{1}(E^{*})/\Omega_{2}(E^{*})]\Omega_{2}$ so that $\Omega_{1}$ and $\bar{\Omega}_{2}$ match their values at $E^{*}$. Thirdly, we join $\Omega_{1}$ and $\bar{\Omega}_{2}$ at $E^{*}$, cutting out the leftover “tails” of each function. Finally, the above steps are repeated between the previously pasted $\Omega$ and the next piece of DOS until there are no pieces left. By this procedure, we can join the DOS from the energy windows into a DOS that spans the whole energy range.
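The joining of two adjacent pieces can be sketched with NumPy as follows. This is our illustration of the steps above, assuming each piece is tabulated as log-DOS values on its own monotone energy grid; the function name and grid handling are ours.

```python
import numpy as np

def merge_pieces(E1, ln_g1, E2, ln_g2):
    """Join two overlapping log-DOS pieces at the energy E* where their
    log-derivatives beta = d(ln Omega)/dE match best (sketch).
    """
    lo, hi = E2[0], E1[-1]                      # overlap interval
    grid = np.linspace(lo, hi, 200)
    # Finite-difference beta of each piece, interpolated on a common grid
    b1 = np.interp(grid, E1[:-1], np.diff(ln_g1) / np.diff(E1))
    b2 = np.interp(grid, E2[:-1], np.diff(ln_g2) / np.diff(E2))
    e_star = grid[np.argmin(np.abs(b1 - b2))]
    # Shift ln_g2 so both pieces agree at E* (ratio of Omegas, in logs)
    shift = np.interp(e_star, E1, ln_g1) - np.interp(e_star, E2, ln_g2)
    # Cut the leftover tails and concatenate
    keep1 = E1 <= e_star
    keep2 = E2 > e_star
    E = np.concatenate([E1[keep1], E2[keep2]])
    ln_g = np.concatenate([ln_g1[keep1], ln_g2[keep2] + shift])
    return E, ln_g
```

Applied left to right, piece by piece, a routine of this kind yields a single log-DOS spanning the whole energy range.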

3 Implementation

Refer to caption
Figure 2: Diagram of the main process split into different subprocesses with a backend.

Before discussing the implementation details, we need to address some preliminary issues. Firstly, we need to generalize the definition of the DOS to include discrete systems: for discrete systems, $\Omega(E)$ is simply the number of states with exact energy $E$. Secondly, to avoid dealing with very large numbers, the logarithm of the DOS ($\log\Omega(E)$) is computed instead of the plain DOS [23, 24], and all equations are transformed accordingly. Thirdly, after each sweep $\log\Omega(E)$ is normalized such that $\log\Omega(E_{\text{min}})=0$ or, equivalently, $\Omega(E)$ is computed in units of $\Omega(E_{\text{min}})$.

This implementation is based on several modules. Firstly, there is a module that loads the simulation data, such as the energy range, step size, target acceptance and so on, from an input file. Secondly, there is the module that runs the actual simulations. This module spawns the parallel subprocesses that run the WL simulation on every window, performs the replica exchanges, merges the DOS pieces and writes the result into a file (Fig. 2). Finally, there are the walker modules, one for each system. These walkers keep track of the state of the system, perform changes, and compute the energy. Furthermore, the walkers are totally independent of the algorithm itself, so they can be used with any other algorithm, and they can be coded in another language and wrapped into Python code. For example, a LAMMPS walker has been developed which contains a wrapper for the C++ LAMMPS library; any other library can be used through a wrapper class. Because of this, the implementation is not penalized by the lower computational efficiency of an interpreted language such as Python, since the time-consuming functions can be implemented in a faster language like C++ or Julia.

It is crucial to note that in the current version the parallelization is based not on the MPI library but on the Python multiprocessing module, which is designed for multi-core systems. The user has to watch out for possible conflicts between this module and the parallelization of the walker in case the latter is an external process.

3.1 Walkers

Every Walker is a Python class that contains at least the following methods:

  • 1.

energy(): This function returns the current energy of the system. The function does not necessarily compute the energy—in some cases, it is useful to have it pre-computed beforehand.

  • 2.

    change(): This function makes a change in the system. The new energy can be computed immediately or when the previous function is invoked. No return value.

  • 3.

undo(): This function undoes the last change performed by the above function. No return value.

  • 4.

state()/state(state): These are a pair of getter and setter functions for the abstract representation of the state of the system. The first one returns the current state and the second one sets it. This abstract representation can be any serializable Python object. These methods are needed in order to perform replica exchanges.

An additional method can be used to set up a specific initial configuration for each simulation window:

  • 1.

setup(index): This method (if it exists) will be invoked after the construction of the walker. A unique index representing the window that contains the walker is passed as an argument. This way the user can perform specific tasks for each walker.

The implementation includes as examples walkers for several common systems such as the Ising model, the Potts model and the harmonic oscillators (quantum and classical). They are thin wrappers over efficient C++ code. A LAMMPS walker is also included, which allows the simulation of any system supported by LAMMPS. These walkers are not linked to a WL simulation in any way, so they can be easily imported and used with any other algorithm that complements an investigation, such as Metropolis–Hastings or simulated annealing.
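As an illustration of the contract, a pure-Python walker for the 2D Ising model might look like the sketch below. The shipped walkers are C++-backed; only the method names come from the interface above, while the class name and internal attributes are ours.

```python
import random

class IsingWalker:
    """Pure-Python 2D Ising walker implementing the interface above
    (illustrative sketch; the package's own walkers wrap C++ code)."""

    def __init__(self, L=10, J=1.0):
        self.L, self.J = L, J
        self.spins = [[random.choice((-1, 1)) for _ in range(L)]
                      for _ in range(L)]
        self._E = self._total_energy()
        self._last = None                       # (i, j, dE) of last flip

    def _total_energy(self):
        E = 0.0
        for i in range(self.L):
            for j in range(self.L):
                s = self.spins[i][j]
                # right and down neighbours only, periodic boundaries
                E -= self.J * s * (self.spins[i][(j + 1) % self.L] +
                                   self.spins[(i + 1) % self.L][j])
        return E

    def energy(self):
        return self._E                          # pre-computed, as allowed

    def change(self):
        i, j = random.randrange(self.L), random.randrange(self.L)
        s = self.spins[i][j]
        nb = (self.spins[i][(j + 1) % self.L] +
              self.spins[i][(j - 1) % self.L] +
              self.spins[(i + 1) % self.L][j] +
              self.spins[(i - 1) % self.L][j])
        dE = 2.0 * self.J * s * nb              # cost of flipping spin s
        self.spins[i][j] = -s
        self._E += dE
        self._last = (i, j, dE)

    def undo(self):
        if self._last is None:
            return
        i, j, dE = self._last
        self.spins[i][j] = -self.spins[i][j]
        self._E -= dE
        self._last = None

    def state(self, state=None):
        if state is None:
            return [row[:] for row in self.spins]
        self.spins = [row[:] for row in state]
        self._E = self._total_energy()
```

Note that change() updates the energy incrementally from the local neighbourhood, which is why energy() can simply return a cached value.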

There are two ways to run a simulation using this module. Firstly, the module can be imported into a Python script file where the user can create the main WL object and invoke the simulation with the run() method. Secondly, a script that loads the simulation from an input file is included.

3.2 Running

The algorithm starts with a user-defined initial configuration. Next, the program explores the energy landscape in order to find a set of configurations whose energies belong to every energy window. This is necessary because we need an initial configuration to start every subprocess. The algorithm used is very basic: it just explores the energy landscape randomly until it finds a suitable configuration. The user can provide previously defined configurations using the walker’s setup() method. Otherwise, for best results, it is preferable to first search for the lowest-energy configuration using a more powerful algorithm, and to start the simulation from this configuration, because the lower energies are usually the most difficult to find. For LAMMPS systems, it is best to use LAMMPS’ own minimization algorithms. Notice that if any energy window has an energy range that is impossible to reach from the initial configuration, the algorithm cannot run.

4 Simulations and computational procedures

A set of different simulations was performed in this work in order to determine the accuracy and practical usefulness of our implementation. For every system, instead of computing the full DOS, we only computed the configurational density of states (CDOS). This means that instead of using the full Hamiltonian $\mathcal{H}=K+\phi$, we only used the configurational part $\phi$. This is done because the contribution of the kinetic part $K$ is always the same and is easy to compute analytically.

The caloric curve for both the microcanonical and canonical ensembles has been computed using the CDOS ($\Omega(\phi)$) obtained in the simulation. If $d$ is the number of degrees of freedom in the system, the microcanonical curve has been computed according to

\langle\beta\rangle_{E}=\frac{\partial}{\partial E}\log\eta(E), (5)

where

\eta(E)\equiv\int\Omega(\phi)\left[E-\phi\right]^{\frac{d}{2}-1}d\phi (6)

is a function that is proportional to the full DOS [25], and the canonical curve has been computed according to the well-known equation

\langle E\rangle_{\beta}=\frac{d}{2\beta}-\frac{\partial}{\partial\beta}\log Z(\beta), (7)

where

Z(\beta)\equiv\int\Omega(\phi)e^{-\beta\phi}d\phi (8)

is the configurational part of the partition function, and $d/2\beta$ is the kinetic energy term.
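Given a tabulated CDOS, the canonical curve of Eqs. (7)–(8) can be evaluated numerically in log space to avoid overflow. Since $-\partial_{\beta}\log Z(\beta)=\langle\phi\rangle_{\beta}$, the configurational term reduces to a weighted average and no numerical differentiation is needed. The sketch below is our illustration (names and the uniform-grid assumption are ours):

```python
import numpy as np

def canonical_curve(phi, ln_omega, betas, d):
    """<E>_beta of Eq. (7) from a log-CDOS tabulated on a uniform grid.

    Uses -d(log Z)/d(beta) = <phi>_beta, so each point is a ratio of two
    sums in which the grid spacing cancels.  Illustrative sketch.
    """
    E = []
    for beta in betas:
        w = ln_omega - beta * phi
        w -= w.max()                            # stabilize the exponentials
        p = np.exp(w)
        phi_mean = (phi * p).sum() / p.sum()    # <phi>_beta
        E.append(d / (2.0 * beta) + phi_mean)   # kinetic + configurational
    return np.array(E)
```

The same stabilization trick (subtracting the maximum of the log-weights before exponentiating) applies to any observable averaged over the canonical weight $\Omega(\phi)e^{-\beta\phi}$.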

The results are presented below. The discrete models and the classical harmonic oscillator are presented as a proof of the soundness of our implementation. The Lennard–Jones and EAM solids are the most important results because they show how our code works seamlessly with the LAMMPS library, which enables us to study a great variety of systems with high precision.

The number of steps per second reported for each system corresponds to standard simulations performed using 8 walkers running on an Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 8 virtual cores on 4 physical cores.

4.1 Discrete Models

Refer to caption
Refer to caption
Figure 3: CDOS (top) and caloric curve (bottom) for the 2-dimensional Ising model on a $10\times 10$ lattice. The inset shows the specific heat. Note that the caloric curve unusually contains the contribution of the kinetic energy of the system, only for purposes of comparison with atomistic models.

The Ising model, the first discrete model studied, is a well-known model of interacting spins that can take two values—up and down. It is one of the simplest models that exhibit a phase transition. The interaction between the spins is described by

\phi=-J\sum_{\langle i,j\rangle}\bm{\sigma_{i}}\cdot\bm{\sigma_{j}}, (9)

where the sum is over adjacent pairs. Usually, there is an additional term with the contribution of an external magnetic field. We use a 2D Ising model of $N=100$ spins on a $10\times 10$ lattice without the magnetic field. The continuous transition can be observed in the caloric curve computed in the simulation, shown in Fig. 3. The transition temperature is $k_{B}T_{c}/J=2.332$, which is close to the well-known exact transition temperature $k_{B}T_{c}/J=2.269$ of this model [26]. The number of steps per second for this system is $145733\pm 44577$.

Refer to caption
Refer to caption
Figure 4: CDOS (top) and caloric curve (bottom) for the Potts model with $q=10$ on a $10\times 10$ lattice. The inset shows the specific heat. Note that the caloric curve unusually contains the contribution of the kinetic energy of the system, only for purposes of comparison with atomistic models.

A more general discrete study case is the Potts model [27]. This is a generalization of the Ising model to classical spins on a regular lattice that can take $q$ different “orientations”, and it has been widely used in multiple research areas [28, 29, 30, 20, 21, 22]. Its interaction energy is given by

\phi=-J\sum_{\langle i,j\rangle}\delta(\sigma_{i},\sigma_{j}), (10)

where the sum is over adjacent pairs and $\delta$ is the usual Kronecker delta. In the particular case of $q=2$ we recover the 2D Ising model up to a multiplicative factor. In what follows, we use a value of $q=10$ on a $10\times 10$ 2-dimensional lattice. This model exhibits a first-order phase transition, which can be observed in the caloric curve obtained in the simulation (Fig. 4).

We obtained the same result as in our previous work, where the phase transitions and metastable phases of this model have been characterized [31]. The metastable region lies inside the S-shaped loop in the microcanonical curve. The number of steps per second for this system is $145142\pm 38669$.

Refer to caption
Refer to caption
Figure 5: CDOS (top) and caloric curve (bottom) for a system of 100 Quantum Harmonic Oscillators. The inset shows the specific heat.

The third, and last, discrete model is a system of $N$ identical 1-dimensional quantum harmonic oscillators [32, 33] described by

\phi=\frac{1}{2}\hbar\omega\left(\sum_{i=1}^{N}n_{i}+\frac{1}{2}\right), (11)

where $n_{i}$ is a non-negative integer equal to the quantum number of the $i$-th oscillator. We consider $N=100$ oscillators. The plot (Fig. 5) shows how the caloric curve starts with a very low slope, which then gradually reaches the well-known [34] classical value $E/Nk_{B}T=1$. The number of steps per second for this system is $154264\pm 28601$.

4.2 Continuous Systems

Refer to caption
Refer to caption
Figure 6: CDOS (top) and caloric curve (bottom) for a system of 100 Classical Harmonic Oscillators. The inset shows the specific heat.

As a first example of a continuous system we consider a system of $N$ identical 1-dimensional classical harmonic oscillators described by

\phi=\frac{1}{2}m\omega^{2}\sum_{i=1}^{N}x_{i}^{2}, (12)

where $x_{i}$ describes the position of the $i$-th oscillator. The energy is computed in units of $m\omega^{2}x_{0}^{2}$, where $x_{0}$ is an arbitrary length. At every step a random oscillator is displaced by a random quantity drawn from a uniform distribution on $[-x_{0},x_{0}]$. As expected, in Fig. 6 we obtain a straight line as the caloric curve [34], with the correct slope of 1. The number of steps per second for this system is $169008\pm 46026$.
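This slope can be checked directly against Eqs. (5)–(6): for $N$ classical oscillators the exact CDOS is $\Omega(\phi)\propto\phi^{N/2-1}$, which gives $\eta(E)\propto E^{N-1}$ and hence $\langle\beta\rangle_{E}=(N-1)/E$, i.e. a slope of essentially 1 in units of $Nk_{B}T$ for large $N$. A numeric sketch of this check (names and uniform-grid assumption are ours):

```python
import numpy as np

def microcanonical_beta(phi, ln_omega, E_a, E_b, d):
    """Average <beta>_E between two energies via Eqs. (5)-(6): build
    eta(E) numerically and difference its logarithm.  The constant grid
    spacing drops out of the log-derivative.  Illustrative sketch.
    """
    def ln_eta(E):
        m = phi < E
        return np.log(np.sum(np.exp(ln_omega[m]) *
                             (E - phi[m]) ** (d / 2 - 1)))
    return (ln_eta(E_b) - ln_eta(E_a)) / (E_b - E_a)
```

The same routine can be applied to a simulated CDOS to obtain the microcanonical curve reported in the figures.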

Refer to caption
Refer to caption
Figure 7: CDOS (top) and caloric curve (bottom) for a Lennard–Jones system with $N=120$ atoms. The inset shows the specific heat.

The second continuous system studied is a Lennard–Jones (LJ) solid, using the energy provided by the classical LJ potential [35], described by

\phi=4\epsilon\sum_{i,j<i}\left[\left(\frac{\sigma}{r_{ij}}\right)^{12}-\left(\frac{\sigma}{r_{ij}}\right)^{6}\right], (13)

where the sum is over every pair of atoms. We use a cut-off value of $2.5\sigma$, which means that we neglect the contribution of pairs with $r_{ij}>2.5\sigma$. The structure is made of $N=120$ atoms with masses $m$ in a cubic cell of side $15\sigma$. This potential and the EAM potential are the most important models studied, since they use the LAMMPS library. For this model, an initial minimization has been performed in order to find the low-energy solid states, thus determining an appropriate energy range for our simulation. Looking at the caloric curve (Fig. 7) we observe a transition temperature of $k_{B}T/\epsilon\approx 0.7$, which agrees with the literature. This model produces very accurate results for noble gases; for example, for the argon solid, in which $\epsilon=1.03\times 10^{-2}\text{ eV}$, this model reports a melting temperature close to $T=83.59\text{ K}$. The number of steps per second for this system is $2301\pm 277$.

Refer to caption
Refer to caption
Figure 8: CDOS (top) and caloric curve (bottom) for an EAM system with $N=432$ atoms. The inset shows the specific heat.

As a last study case, we use the embedded atom model (EAM) interatomic potential. The EAM is widely used for metallic systems, and incorporates a local density into the atomic interaction. A generalized form of this type of potential is given by

E_{i}=F_{\alpha}\left(\sum_{j\neq i}\rho_{\beta}(r_{ij})\right)+\frac{1}{2}\sum_{j\neq i}\phi_{\alpha\beta}(r_{ij}), (14)

where $r_{ij}$ is the distance between atoms $i$ and $j$, $\phi_{\alpha\beta}$ is a pair-wise potential function, $\rho_{\beta}$ is the contribution to the electron charge density from atom $j$ of type $\beta$ at the location of atom $i$, and $F$ is an embedding function that represents the energy required to place atom $i$ of type $\alpha$ into the electron cloud. Even though this kind of potential has been widely used [36, 37, 38], it is still computationally very time-consuming, and it therefore provides a useful testing ground to accurately assess the scalability of our implementation of the WL algorithm. The parameters used in the simulations are the ones that fit the energies for Fe atoms [39]. The results are presented in Fig. 8. The WL algorithm has been tested on EAM models previously [40, 41], presenting an accurate estimation of the DOS. The number of steps per second for this system is $6.32\pm 0.12$.

5 Concluding remarks

Our work presents a modular implementation of the original Wang–Landau algorithm enhanced with state-of-the-art developments such as support for off-lattice systems and parallelization. It also integrates seamlessly with the LAMMPS library for fast computation of the energy of atomic configurations, which means that we can, in principle, study any system that can be described by a potential supported by LAMMPS or, indeed, by any other atomistic simulation package for which a wrapper can be written. The results obtained in our simulations of well-known systems matched our expectations, including the especially important cases of the Lennard–Jones and EAM systems, because these are the models that run with the LAMMPS library. Consequently, our implementation of the WL algorithm can be used from now on to efficiently study the properties of more realistic models of matter.

Acknowledgments

SD acknowledges partial support by ANID FONDECYT 1171127 and ANID PIA ACT172101 grants. FM thanks to Universidad Andrés Bello for providing the doctorate scholarship and acknowledges funding from ANID BECAS/DOCTORADO NACIONAL 21212267. Computational work was supported by the supercomputing infrastructures of the NLHPC (ECM-02), and FENIX (UNAB).

References

  • Shimizu [2004] H. Shimizu, Phys. Rev. E 70, 056704 (2004).
  • Habeck [2007] M. Habeck, Phys. Rev. Lett. 98, 200601 (2007).
  • Montecinos et al. [2021] A. Montecinos, C. Loyola, J. Peralta, and S. Davis, Phys. A 562, 125279 (2021).
  • Wang and Landau [2001] F. Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001).
  • Vogel et al. [2013] T. Vogel, Y. W. Li, T. Wüst, and D. P. Landau, Phys. Rev. Lett. 110, 210603 (2013).
  • Liang [2005] F. Liang, Journal of the American Statistical Association 100, 1311 (2005).
  • Atchadé and Liu [2010] Y. F. Atchadé and J. S. Liu, Statistica Sinica 20, 209 (2010), ISSN 10170405, 19968507.
  • Bornn et al. [2013] L. Bornn, P. E. Jacob, P. D. Moral, and A. Doucet, Journal of Computational and Graphical Statistics 22, 749 (2013).
  • van Rossum [1995] G. van Rossum, Tech. Rep. CS-R9526, Centrum voor Wiskunde en Informatica (CWI), Amsterdam (1995).
  • Plimpton [1995] S. Plimpton, J. Comput. Phys. 117, 1 (1995).
  • Dalcin et al. [2011] L. D. Dalcin, R. R. Paz, P. A. Kler, and A. Cosimo, Advances in Water Resources 34, 1124 (2011).
  • Belardinelli and Pereyra [2007] R. E. Belardinelli and V. D. Pereyra, Phys. Rev. E 75, 046701 (2007).
  • Wüst and Landau [2009] T. Wüst and D. P. Landau, Phys. Rev. Lett. 102, 178101 (2009).
  • Liang et al. [2007] F. Liang, C. Liu, and R. J. Carroll, Journal of the American Statistical Association 102, 305 (2007).
  • Tröster and Dellago [2005] A. Tröster and C. Dellago, Phys. Rev. E 71, 066705 (2005).
  • Rathore et al. [2003] N. Rathore, T. A. Knotts, and J. J. de Pablo, The Journal of Chemical Physics 118, 4285 (2003).
  • Taylor et al. [2009] M. P. Taylor, W. Paul, and K. Binder, The Journal of Chemical Physics 131, 114907 (2009).
  • Calvo [2002] F. Calvo, Molecular Physics 100, 3421 (2002).
  • Brown and Schulthess [2005] G. Brown and T. C. Schulthess, Journal of Applied Physics 97, 10E303 (2005).
  • Torbrügge and Schnack [2007] S. Torbrügge and J. Schnack, Phys. Rev. B 75, 054403 (2007).
  • Alder et al. [2004] S. Alder, S. Trebst, A. K. Hartmann, and M. Troyer, Journal of Statistical Mechanics: Theory and Experiment 2004, P07008 (2004).
  • Landau and Binder [2014] D. P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics (Cambridge, 2014).
  • Landau et al. [2004] D. P. Landau, S.-H. Tsai, and M. Exler, American Journal of Physics 72, 1294 (2004).
  • Barash et al. [2017] L. Y. Barash, M. A. Fadeeva, and L. N. Shchur, Phys. Rev. E 96, 043307 (2017).
  • Davis et al. [2020] S. Davis, J. Jain, and B. Bora, Phys. Rev. E 102, 042137 (2020).
  • Baxter [2016] R. J. Baxter, Exactly solved models in Statistical Mechanics (Elsevier, 2016).
  • Wu [1982] F. Y. Wu, Rev. Mod. Phys. 54, 235 (1982).
  • Turner and Sherratt [2002] S. Turner and J. A. Sherratt, J. Theor. Biol. 216, 85 (2002).
  • Davis et al. [2014] S. Davis, Y. Navarrete, and G. Gutiérrez, Eur. Phys. J. B 87, 78 (2014).
  • Farías and Davis [2020] C. Farías and S. Davis, arXiv e-prints arXiv:2008.04368 (2020), 2008.04368.
  • Moreno et al. [2018] F. Moreno, S. Davis, C. Loyola, and J. Peralta, Phys. A 509, 361 (2018).
  • Dekker [1981] H. Dekker, Physics Reports 80, 1 (1981), ISSN 0370-1573.
  • Um et al. [2002] C.-I. Um, K.-H. Yeon, and T. F. George, Physics Reports 362, 63 (2002), ISSN 0370-1573.
  • Greiner et al. [2012] W. Greiner, L. Neise, and H. Stöcker, Thermodynamics and statistical mechanics (Springer Science & Business Media, 2012).
  • Lennard-Jones [1931] J. E. Lennard-Jones, Proceedings of the Physical Society 43, 461 (1931).
  • Daw and Baskes [1984] M. S. Daw and M. I. Baskes, Phys. Rev. B 29, 6443 (1984).
  • Daw et al. [1993] M. S. Daw, S. M. Foiles, and M. I. Baskes, Materials Science Reports 9, 251 (1993), ISSN 0920-2307.
  • Ackland and Bonny [2020] G. J. Ackland and G. Bonny, pp. 544–572 (2020).
  • Malerba et al. [2010] L. Malerba, M. Marinica, N. Anento, C. Björkas, H. Nguyen, C. Domain, F. Djurabekova, P. Olsson, K. Nordlund, A. Serra, et al., Journal of Nuclear Materials 406, 19–38 (2010).
  • Perera et al. [2016] D. Perera, T. Vogel, and D. P. Landau, Phys. Rev. E 94, 043308 (2016).
  • Aleksandrov et al. [2010] T. Aleksandrov, C. Desgranges, and J. Delhommelle, Fluid Phase Equilibria 287, 79 (2010).