Publication:
Accelerating Atomistic Simulations with GPUs and Machine Learning

Date

2024-05-31

Citation

Johansson, Anders. 2024. Accelerating Atomistic Simulations with GPUs and Machine Learning. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.

Abstract

The field of atomistic simulations is evolving rapidly. Simulations are becoming more reliable and accurate through the development of ever more advanced machine-learning (ML) potentials, such as uncertainty-aware and equivariant methods. Simultaneously, the leading GPU-based supercomputers are finally reaching the exascale, paving the way for larger and longer simulations. This thesis revolves around the intersection of these advances: the acceleration of state-of-the-art ML potentials with modern GPUs on the world's largest supercomputers. More concretely, my focus has been the performance portability and scalability of FLARE and Allegro, two ML potentials near opposite ends of the cost-accuracy trade-off. FLARE is a sparse Gaussian process potential that pushes the envelope for speed while maintaining reasonable accuracy. I have developed a Kokkos implementation of FLARE that outperformed previous state-of-the-art methods by 70% on the second-fastest supercomputer in the world, and recently reached one trillion atoms on Frontier, the fastest supercomputer. Allegro is an equivariant neural network implemented in PyTorch that sacrifices some speed to reach leading accuracy while remaining scalable thanks to its innovative architecture. Through this scalability and my LAMMPS interface, Allegro was able to efficiently utilize 5120 GPUs and reach relevant speeds for a wide range of biomolecular structures.
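For context, the sketch below illustrates how an ML potential such as Allegro is typically driven from LAMMPS via its Python module, the kind of interface the abstract refers to. The pair-style name, the deployed model file, and the data file are illustrative assumptions based on the publicly documented pair_allegro plugin, not an excerpt of the thesis code.

```python
# Minimal sketch: molecular dynamics in LAMMPS with an ML potential.
# Assumes LAMMPS is built with its Python module and the pair_allegro plugin;
# "deployed_model.pth" and "data.silicon" are hypothetical file names.
from lammps import lammps

lmp = lammps()
lmp.commands_string("""
units         metal
atom_style    atomic
read_data     data.silicon          # hypothetical structure file

pair_style    allegro               # ML potential exposed as a LAMMPS pair style
pair_coeff    * * deployed_model.pth Si

timestep      0.001                 # 1 fs
velocity      all create 300.0 12345
fix           1 all nve
run           1000
""")
```

Because the potential is exposed as an ordinary pair style, the same input deck can be scaled across many GPUs using LAMMPS's standard MPI domain decomposition.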

Keywords

Active learning, Equivariance, GPU, Machine learning, Molecular dynamics, Performance portability, Computational physics, Materials science, Computer science

Terms of Use

This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.
