The U.S. Department of Energy’s (DOE) Argonne National Laboratory and Hewlett Packard Enterprise (HPE) unveiled a new testbed supercomputer that will deliver up to 4x the performance of Argonne’s current supercomputers, preparing critical workloads for future exascale systems.
The new system, which Argonne has named Polaris, will be built by HPE, and hosted and managed by the Argonne Leadership Computing Facility (ALCF), a U.S. DOE Office of Science User Facility. It will enable scientists and developers to test and optimize software codes and applications to tackle a range of AI, engineering and scientific projects planned for the forthcoming exascale supercomputer, Aurora, a collaboration between Argonne, Intel and HPE.
Polaris is designed with industry-leading high-performance computing (HPC) and AI solutions to advance investigations into society’s most complex and pressing issues, from understanding the biology of viruses to revealing the secrets of the universe. It will also augment Argonne’s ongoing efforts and achievements in areas such as clean energy, climate resilience and manufacturing.
In addition, Polaris will help researchers integrate HPC and AI with other experimental facilities, including Argonne’s Advanced Photon Source and the Center for Nanoscale Materials, both DOE Office of Science User Facilities.
“Polaris is well equipped to help move the ALCF into the exascale era of computational science by accelerating the application of AI capabilities to the growing data and simulation demands of our users,” said Michael E. Papka, director of the ALCF. “Beyond getting us ready for Aurora, Polaris will further provide a platform to experiment with the integration of supercomputers and large-scale experiment facilities, like the Advanced Photon Source, making HPC available to more scientific communities. Polaris will also provide a broader opportunity to help prototype and test the integration of HPC with real-time experiments and sensor networks.”
Polaris: Argonne’s north star propels new era of exascale
Polaris will deliver approximately 44 petaflops of peak double-precision performance and nearly 1.4 exaflops of theoretical AI performance, based on its mixed-precision compute capabilities.
It will be built using 280 HPE Apollo Gen10 Plus systems, which are HPC and AI architectures built for the exascale era and customized to include the following end-to-end solutions:
· Powerful compute to improve modeling, simulation and data-intensive workflows using 560 2nd and 3rd Gen AMD EPYC processors
· Supercharged AI capabilities to support data- and image-intensive workloads while preparing for future exascale-level GPU-enabled deployments, using 2,240 NVIDIA A100 Tensor Core GPUs, making Polaris the ALCF’s largest GPU-based system to date
· Higher speed and congestion control for large data-intensive and AI workloads using HPE Slingshot, the world’s only high-performance Ethernet fabric designed for HPC and AI solutions. HPE Slingshot will also be featured in Argonne’s Aurora exascale system.
· Fine-grained, centralized monitoring and management for optimal performance using HPE Performance Cluster Manager, a system management software solution
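As a rough sanity check, the headline performance figures line up with the stated component counts. The per-GPU peak rates below are an assumption drawn from NVIDIA’s public A100 datasheet (19.5 TFLOPS FP64 Tensor Core; 624 TFLOPS FP16/BF16 Tensor Core with structured sparsity), not from this announcement:

```python
# Back-of-the-envelope check of Polaris's headline figures.
systems = 280   # HPE Apollo Gen10 Plus systems
cpus = 560      # 2nd/3rd Gen AMD EPYC processors
gpus = 2240     # NVIDIA A100 Tensor Core GPUs

print(cpus // systems)  # CPUs per system: 2
print(gpus // systems)  # GPUs per system: 8

# Assumed per-A100 peaks (NVIDIA datasheet, not stated in the announcement):
fp64_tflops = 19.5   # FP64 Tensor Core
ai_tflops = 624      # FP16/BF16 Tensor Core with sparsity

fp64_petaflops = gpus * fp64_tflops / 1_000      # ~43.7 PF, i.e. "approximately 44"
ai_exaflops = gpus * ai_tflops / 1_000_000       # ~1.40 EF, i.e. "nearly 1.4"
print(round(fp64_petaflops, 1))
print(round(ai_exaflops, 2))
```

Under these assumptions, essentially all of the quoted peak performance comes from the GPUs, which is typical for accelerator-centric HPC systems.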
“As we approach the exascale era, which will power a new age of insight and innovation, high performance computing (HPC) will play a critical role in harnessing data to take on the world’s most pressing challenges. Increasingly, the computational power and scale required to process artificial intelligence and machine learning data sets can only be delivered through HPC systems, and HPE uniquely provides a powerful, software-driven platform capable of tackling complex scientific data and simulations,” said Justin Hotard, senior vice president and general manager, HPC and Mission Critical Solutions at HPE. “The U.S. Department of Energy’s (DOE) Office of Science continues to make tremendous impacts in accelerating scientific and engineering breakthroughs using HPC. Our latest collaboration with the DOE’s Argonne National Laboratory to build and deliver the Polaris testbed supercomputer will further its mission by preparing users for the magnitude of technological advancement that exascale systems will deliver.”
Polaris prepares scientists to tackle exascale-level problems
Initially, Polaris will be dedicated to research teams participating in initiatives such as the DOE’s Exascale Computing Project and the ALCF’s Aurora Early Science Program, which are already tackling complex issues such as:
· Advancing cancer treatment by accelerating research into the role of biological variables in a tumor cell’s path, using data science to drive analysis of extreme-scale fluid-structure-interaction simulations; and by predicting drug response, enabling billions of virtual drugs to be screened, from single agents to numerous combinations, and their effects on tumor cells predicted.
· Advancing the nation’s energy security while minimizing climate impact through biochemical research in the NWChemEx project, funded by the DOE Office of Science’s Biological and Environmental Research program. Researchers are solving key molecular problems in biofuel production by developing models that optimize feedstocks for biomass production and by analyzing the process of converting biomass into biofuels.
· Expanding the boundaries of physics with particle collision research in the ATLAS experiment, which uses the Large Hadron Collider (LHC), the world’s most powerful particle accelerator, sited at CERN, near Geneva, Switzerland. Scientists study the complex products from particle collisions in very large detectors to deepen our understanding of the fundamental constituents of matter, including the search for evidence of dark matter.
User communities within the DOE’s Exascale Computing Project will also use Polaris to optimize engineering tasks for Argonne’s Aurora, including scaling applications across combined CPU- and GPU-enabled systems and the complex integration of workflows combining modeling, simulation, AI and other data-intensive components. The delivery and installation of Polaris is scheduled to begin this month. It will go into use in early 2022 and will open to the broader HPC community in spring 2022 to prepare workloads for the next generation of the DOE’s high performance computing resources.