From the Foreword:
"The authors of the chapters in this book are the pioneers who will explore the exascale frontier. The path forward will not be easy... These authors, along with their colleagues who will produce these powerful computer systems will, with dedication and determination, overcome the scalability problem, discover the new algorithms needed to achieve exascale performance for the broad range of applications that they represent, and create the new tools needed to support the development of scalable and portable science and engineering applications. Although the focus is on exascale computers, the benefits will permeate all of science and engineering because the technologies developed for the exascale computers of tomorrow will also power the petascale servers and terascale workstations of tomorrow. These affordable computing capabilities will empower scientists and engineers everywhere."
— Thom H. Dunning, Jr., Pacific Northwest National Laboratory and University of Washington, Seattle, Washington, USA
"This comprehensive summary of applications targeting Exascale at the three DoE labs is a must read."
— Rio Yokota, Tokyo Institute of Technology, Tokyo, Japan
"Numerical simulation is now a need in many fields of science, technology, and industry. The complexity of the simulated systems coupled with the massive use of data makes HPC essential to move towards predictive simulations. Advances in computer architecture have so far permitted scientific advances, but at the cost of continually adapting algorithms and applications. The next technological breakthroughs force us to rethink the applications by taking energy consumption into account. These profound modifications require not only anticipation and sharing but also a paradigm shift in application design to ensure the sustainability of developments by guaranteeing a certain independence of the applications to the profound modifications of the architectures: it is the passage from optimal performance to the portability of performance. It is the challenge of this book to demonstrate by example the approach that one can adopt for the development of applications offering performance portability in spite of the profound changes of the computing architectures."
— Christophe Calvin, CEA, Fundamental Research Division, Saclay, France
"Three editors, one from each of the High Performance Computer Centers at Lawrence Berkeley, Argonne, and Oak Ridge National Laboratories, have compiled a very useful set of chapters aimed at describing software developments for the next generation exa-scale computers. Such a book is needed for scientists and engineers to see where the field is going and how they will be able to exploit such architectures for their own work. The book will also benefit students as it provides insights into how to develop software for such computer architectures. Overall, this book fills an important need in showing how to design and implement algorithms for exa-scale architectures which are heterogeneous and have unique memory systems. The book discusses issues with developing user codes for these architectures and how to address these issues including actual coding examples."
— Dr. David A. Dixon, Robert Ramsay Chair, The University of Alabama, Tuscaloosa, Alabama, USA
Table of Contents
Chapter 1 Portable Methodologies for Energy Optimization on Large-Scale Power-Constrained Systems
Kevin J. Barker and Darren J. Kerbyson
Chapter 2 Performance Analysis and Debugging Tools at Scale
Scott Parker, John Mellor-Crummey, Dong H. Ahn, Heike Jagode, Holger Brunst, Sameer Shende, Allen D. Malony, David Lecomber, John V. DelSignore, Jr., Ronny Tschüter, Ralph Castain, Kevin Harms, Philip Carns, Ray Loy, and Kalyan Kumaran
Chapter 3 Exascale Challenges in Numerical Linear and Multilinear Algebras
Dmitry I. Lyakh and Wayne Joubert
Chapter 4 Exposing Hierarchical Parallelism in the FLASH Code for Supernova Simulation on Summit and Other Architectures
Thomas Papatheodore and O. E. Bronson Messer
Chapter 5 NAMD: Scalable Molecular Dynamics Based on the Charm++ Parallel Runtime System
Bilge Acun, Ronak Buch, Laxmikant Kale, and James C. Phillips
Chapter 6 Developments in Computer Architecture and the Birth and Growth of Computational Chemistry
Wim Nieuwpoort and Ria Broer
Chapter 7 On Preparing the Super Instruction Architecture and Aces4 for Future Computer Systems
Jason Byrd, Rodney Bartlett, and Beverly A. Sanders
Chapter 8 Transitioning NWChem to the Next Generation of Manycore Machines
Eric J. Bylaska, Edoardo Aprà, Karol Kowalski, Mathias Jacquelin, Wibe A. de Jong, Abhinav Vishnu, Bruce Palmer, Jeff Daily, Tjerk P. Straatsma, Jeff R. Hammond, and Michael Klemm
Chapter 9 Exascale Programming Approaches for Accelerated Climate Modeling for Energy
Matthew R. Norman, Azamat Mametjanov, and Mark Taylor
Chapter 10 Preparing the Community Earth System Model for Exascale Computing
John M. Dennis, Christopher Kerr, Allison H. Baker, Brian Dobbins, Kevin Paul, Richard Mills, Sheri Mickelson, Youngsung Kim, and Raghu Kumar
Chapter 11 Large Eddy Simulation of Reacting Flow Physics and Combustion
Joseph C. Oefelein and Ramanan Sankaran
Chapter 12 S3D-Legion: An Exascale Software for Direct Numerical Simulation of Turbulent Combustion with Complex Multicomponent Chemistry
Sean Treichler, Michael Bauer, Ankit Bhagatwala, Giulio Borghesi, Ramanan Sankaran, Hemanth Kolla, Patrick S. McCormick, Elliott Slaughter, Wonchan Lee, Alex Aiken, and Jacqueline Chen
Chapter 13 Data and Workflow Management for Exascale Global Adjoint Tomography
Matthieu Lefebvre, Yangkang Chen, Wenjie Lei, David Luet, Youyi Ruan, Ebru Bozdağ, Judith Hill, Dimitri Komatitsch, Lion Krischer, Daniel Peter, Norbert Podhorszki, James Smith, and Jeroen Tromp
Chapter 14 Scalable Structured Adaptive Mesh Refinement with Complex Geometry
Brian Van Straalen, David Trebotich, Andrey Ovsyannikov, and Daniel T. Graves
Chapter 15 Extreme Scale Unstructured Adaptive CFD for Aerodynamic Flow Control
Kenneth E. Jansen, Michel Rasquin, Jed Brown, Cameron Smith, Mark S. Shephard, and Chris Carothers
Chapter 16 Lattice Quantum Chromodynamics and Chroma
Bálint Joó, Robert G. Edwards, and Frank T. Winter
Chapter 17 PIC Codes on the Road to Exascale Architectures
Henri Vincenti, Mathieu Lobet, Remi Lehe, Jean-Luc Vay, and Jack Deslippe
Chapter 18 Extreme-Scale De Novo Genome Assembly
Evangelos Georganas, Steven Hofmeyr, Leonid Oliker, Rob Egan, Daniel Rokhsar, Aydin Buluc, and Katherine Yelick
Chapter 19 Exascale Scientific Applications: Programming Approaches for Scalability, Performance, and Portability: KKRnano
Paul F. Baumeister, Marcel Bornemann, Dirk Pleiter, and Rudolf Zeller
Chapter 20 Real-Space Multiple-Scattering Theory and Its Applications at Exascale
Markus Eisenbach and Yang Wang
Chapter 21 Development of QMCPACK for Exascale Scientific Computing
Anouar Benali, David M. Ceperley, Ed D’Azevedo, Mark Dewing, Paul R. C. Kent, Jeongnim Kim, Jaron T. Krogel, Ying Wai Li, Ye Luo, Tyler McDaniel, Miguel A. Morales, Amrita Mathuria, Luke Shulenburger, and Norm M. Tubman
Chapter 22 Preparing an Excited-State Materials Application for Exascale
Jack Deslippe, Felipe H. da Jornada, Derek Vigil-Fowler, Taylor Barnes, Thorsten Kurth, and Steven G. Louie
Chapter 23 Global Gyrokinetic Particle-in-Cell Simulation
William Tang and Zhihong Lin
Chapter 24 The Fusion Code XGC: Enabling Kinetic Study of Multiscale Edge Turbulent Transport in ITER
Eduardo D’Azevedo, Stephen Abbott, Tuomas Koskela, Patrick Worley, Seung-Hoe Ku, Stephane Ethier, Eisung Yoon, Mark Shephard, Robert Hager, Jianying Lang, Jong Choi, Norbert Podhorszki, Scott Klasky, Manish Parashar, and Choong-Seock Chang
Dr. T. P. Straatsma is the Group Leader for Scientific Computing in the National Center for Computational Sciences, the division that houses the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, and an Adjunct Faculty member in the Chemistry Department of the University of Alabama in Tuscaloosa. He earned his Ph.D. in Mathematics and Natural Sciences from the University of Groningen, the Netherlands. After a postdoctoral associate appointment, followed by a faculty position in the Department of Chemistry at the University of Houston, he moved to Pacific Northwest National Laboratory (PNNL), where he was a co-developer of the NWChem computational chemistry software, established a program in computational biology, and served as group leader for computational biology and bioinformatics. Straatsma served as Director for the Extreme Scale Computing Initiative at PNNL, focusing on developing science capabilities for emerging petascale computing architectures. He was promoted to Laboratory Fellow, the highest scientific rank at the Laboratory.
In 2013 he joined Oak Ridge National Laboratory, where, in addition to being Group Leader for Scientific Computing, he is the Lead for the Center for Accelerated Application Readiness, and Lead for the Applications Working Group in the Institute for Accelerated Data Analytics and Computing, focusing on preparing scientific applications for the next generation pre-exascale and exascale computer architectures.
Straatsma has been a pioneer in the development, efficient implementation, and application of advanced modeling and simulation methods as key scientific tools in the study of chemical and biomolecular systems, complementing analytical theories and experimental studies. His research focuses on the development of computational techniques that provide unique and detailed atomic-level information that is difficult or impossible to obtain by other methods, and that contributes to the understanding of the properties and function of these systems. In particular, his expertise is in the evaluation of thermodynamic properties from large-scale molecular simulations, having been involved since the mid-1980s in the early development of thermodynamic perturbation and thermodynamic integration methodologies. His research interests also include the design of efficient implementations of these methods on modern, complex computer architectures, from the vector processing supercomputers of the 1980s to the massively parallel and accelerated computer systems of today. Since 1995, he has been a core developer of the massively parallel molecular science software suite NWChem, responsible for its molecular dynamics simulation capability. Straatsma has co-authored nearly 100 publications in peer-reviewed journals and conferences, was the recipient of the 1999 R&D 100 Award for the NWChem molecular science software suite, and was recently elected Fellow of the American Association for the Advancement of Science.
Katie B. Antypas is the Data Department Head at the National Energy Research Scientific Computing (NERSC) Center, which includes the Data and Analytics Services Group, Data Science Engagement Group, Storage Systems Group, and Infrastructure Services Group. The Department's mission is to pioneer new capabilities to accelerate large-scale data-intensive science discoveries as the Department of Energy Office of Science workload grows to include more data analysis from experimental and observational facilities such as light sources, telescopes, satellites, genomic sequencers, and particle colliders. Katie is also the Project Manager for the NERSC-8 system procurement, a project to deploy NERSC's next generation HPC supercomputer, named Cori, in 2016, a system comprising the Cray interconnect and the Intel Knights Landing manycore processor. The processor features on-package high-bandwidth memory and more than 64 cores per node, each with 4 hardware threads. These technologies offer applications great performance potential, but will require users to modify their applications to take advantage of the multi-level memory and the large number of hardware threads. To address this concern, Katie and the NERSC-8 team launched the NERSC Exascale Science Applications Program (NESAP), an initiative to prepare approximately 20 application teams for the Knights Landing architecture through close partnerships with vendors, science application experts, and performance analysts.
Katie is an expert in parallel I/O application performance and, for the past 6 years, has given a parallel-I/O tutorial at the SC conference. She also has expertise in parallel application performance, HPC architectures, HPC user support, and Office of Science user requirements. Katie is also a PI on a new ASCR Research Project, "Science Search: Automated MetaData Using Machine Learning". Before coming to NERSC, Katie worked at the ASC Flash Center at the University of Chicago supporting the FLASH code, a highly scalable, parallel, adaptive mesh refinement astrophysics application, for which she wrote the parallel I/O modules in HDF5 and Parallel-NetCDF. She has an M.S. in Computer Science from the University of Chicago and a bachelor's degree in Physics from Wellesley College.
Timothy J. Williams is Deputy Director of Science at the Argonne Leadership Computing Facility at Argonne National Laboratory. He works on the Catalyst team, a group of computational scientists who work with the large-scale projects using ALCF supercomputers. Tim manages the Early Science Program (ESP). The goal of the ESP is preparing a set of scientific applications for early, pre-production use of next-generation computers such as ALCF's most recent Cray-Intel system based on second generation Xeon Phi processors, Theta, and its forthcoming pre-exascale system, Aurora, based on third generation Xeon Phi. Tim received his BS in Physics and Mathematics from Carnegie Mellon University in 1982 and his PhD in Physics in 1988 from the College of William and Mary, focusing on the numerical study of a statistical turbulence theory using Cray vector supercomputers. Since 1989, he has specialized in the application of large-scale parallel computation to various scientific domains, including particle-in-cell plasma simulation for magnetic fusion, contaminant transport in groundwater flows, global ocean modeling, and multimaterial hydrodynamics. He spent eleven years in research at Lawrence Livermore National Laboratory and Los Alamos National Laboratory. In the early 1990s, Tim was part of the pioneering Massively Parallel Computing Initiative at LLNL, working on plasma PIC simulations and dynamic alternating direction implicit (ADI) solver implementations on the BBN TC2000 computer. In the late 1990s, he worked at Los Alamos' Advanced Computing Laboratory with a team of scientists developing the POOMA (Parallel Object Oriented Methods and Applications) framework, a C++ class library encapsulating efficient parallel execution beneath high-level data-parallel interfaces designed for scientific computing.
Tim then spent nine years as a quantitative software developer for the financial industry, at Morgan Stanley in New York focusing on fixed-income securities and derivatives, and at Citadel in Chicago focusing most recently on detailed valuation of subprime mortgage-backed securities. Tim returned to computational science at Argonne in 2009.