Accelerating Simulations in the Gibbs and Canonical Ensembles with GPUs

Monday, October 17, 2011
Exhibit Hall B (Minneapolis Convention Center)
Jason R. Mick and Jeffrey J. Potoff, Chemical Engineering and Materials Science, Wayne State University, Detroit, MI

The use of molecular simulation to study complex physical phenomena at the atomic level has grown exponentially over the last decade with increasing CPU power and the development of parallel molecular dynamics codes that scale efficiently over thousands of processors. As a result, the simulation of 100,000-atom systems has become routine, and state-of-the-art calculations are now possible on systems of over 1 million atoms; examples include the satellite tobacco mosaic virus [1] and the ribosome [2]. Despite our current ability to simulate such systems, there is demand to simulate ever larger systems over longer timescales.

Graphics processing units (GPUs) have begun to play a crucial role in extending molecular simulation to even larger length and time scales [3-5]. In comparison to CPUs, GPUs possess a highly parallel architecture that allows large data sets to be processed more effectively. For this reason, the intrinsic parallelism of particle-based simulations can be exploited for more efficient computation on the GPU [7,8]. In the CUDA programming model, each multiprocessor can execute a large number of threads in parallel. These threads, organized into blocks, cooperate to share data and synchronize operations efficiently.

In this work, a GPU-accelerated Monte Carlo simulation engine capable of performing simulations in the canonical, isobaric-isothermal, and Gibbs [6] ensembles is presented. The code uses a modular architecture and is written in C++. Sections of the code designed to run on the GPU, such as the calculation of pairwise interactions, use NVIDIA's Compute Unified Device Architecture (CUDA). In comparison to optimized serial CPU code, the GPU-accelerated engine provides an order-of-magnitude reduction in wall-clock time for a system of 65,536 particles interacting via the Lennard-Jones potential. Similar performance increases are observed for simulations in the Gibbs ensemble.

1. Freddolino, P.L., et al., Molecular dynamics simulations of the complete satellite tobacco mosaic virus. Structure, 2006. 14(3): p. 437-49.

2. Sanbonmatsu, K.Y., S. Joseph, and C.S. Tung, Simulating movement of tRNA into the ribosome during decoding. Proc Natl Acad Sci U S A, 2005. 102(44): p. 15854-9.

3. Stone, J.E., et al., Accelerating molecular modeling applications with graphics processors. Journal of Computational Chemistry, 2007. 28(16): p. 2618-40.

4. AMBER 11 NVIDIA GPU Acceleration Support (accessed April 29, 2011).

5. Brown, W.M., Porting LAMMPS to GPUs. SOS 14 Conference.

6. Panagiotopoulos, A.Z., Direct determination of phase coexistence properties of fluids by Monte Carlo simulation in a new ensemble. Molecular Physics, 1987. 61(4): p. 813-26.

7. Anderson, J.A., et al., General purpose molecular dynamics simulations fully implemented on graphics processing units. Journal of Computational Physics, 2008. 227(10): p. 5342-59.

8. Liu, W., et al., Molecular dynamics simulations on commodity GPUs with CUDA. Proceedings of the 14th International Conference on High Performance Computing, 2007. p. 185-96.
