In May of this year, the Supercomputing Conference (SC15) Test of Time Award Committee announced that the third winner of the prestigious Test of Time Award (ToTA) would be the paper “The NAS Parallel Benchmarks — Summary and Preliminary Results,” written by D. Bailey, E. Barszcz, J. Barton, D. Browning, R. Carter, L. Dagum, R. Fatoohi, P. Frederickson, T. Lasinski, R. Schreiber, H. Simon, V. Venkatakrishnan, and S. Weeratunga, and published at the SC91 conference.
The ToTA recognizes an outstanding paper that appeared at a past SC conference and has deeply influenced the high performance computing (HPC) discipline; it is a mark of historical impact, acknowledging that the paper changed HPC trends. The award also serves as an incentive for researchers and students to submit their best work to the SC Conference, and as a tool for understanding why and how results last in the HPC discipline.
To qualify for award consideration, a paper must be at least 10 years old; in 2015, papers from 18 years of conferences were eligible. Anyone except an author or co-author of the nominated paper may submit a nomination, and an individual may nominate up to five papers, drawn from any eligible years.
The authors of the 2015 ToTA paper will receive an award citation and a $1,000 prize. In addition, they have been asked to give a presentation in a non-plenary session at SC15 in November.
In 1991, this team of computer scientists from the Numerical Aerodynamic Simulation Program — predecessor to the NASA Advanced Supercomputing (NAS) facility at Ames Research Center — unveiled the NAS Parallel Benchmarks (NPB), developed in response to the U.S. space agency’s increasing involvement with massively parallel architectures and the need for a more rational procedure for selecting supercomputers to support agency missions. At the time, existing benchmarks were usually specialized for vector computers, with shortfalls including tuning restrictions that impeded parallelism and insufficient problem sizes, making them inappropriate for highly parallel systems.
The NPBs mimic the computation and data movement of large-scale computational fluid dynamics (CFD) applications and provide an objective evaluation of parallel HPC architectures. The original NPBs featured “pencil-and-paper” specifications, which bypassed many of the difficulties associated with standard benchmarking methods for sequential or vector systems. The principal focus was computational aerophysics, although most of these benchmarks have broader relevance for many real-world scientific computing applications.
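To give a sense of what such kernels exercise, here is a minimal, illustrative sketch in the spirit of the NPB CG benchmark, whose core is a conjugate gradient iteration dominated by matrix-vector products, dot products, and vector updates. This is not the official pencil-and-paper specification: the dense toy matrix, problem size, and iteration count below are placeholders (the real benchmark specifies a large, randomly generated sparse matrix and exact verification values).

```python
import numpy as np

def conjugate_gradient(A, b, iters=25):
    """Plain conjugate gradient: the computation and data-movement
    pattern that dominates the NPB 'CG' kernel is a sequence of
    matrix-vector products, dot products, and vector updates."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r
    for _ in range(iters):
        q = A @ p                     # matrix-vector product: the bandwidth-bound core
        alpha = rho / (p @ q)
        x += alpha * p
        r -= alpha * q
        rho_new = r @ r
        p = r + (rho_new / rho) * p   # new search direction
        rho = rho_new
    return x

# Toy symmetric positive-definite system standing in for the benchmark's
# randomly generated sparse matrix (size and values are illustrative only).
n = 64
M = np.random.rand(n, n)
A = M @ M.T + n * np.eye(n)
b = np.ones(n)
x = conjugate_gradient(A, b)
print("residual:", np.linalg.norm(b - A @ x))
```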
The NPBs quickly became an industry standard and have since been implemented in many modern programming paradigms. Since 1991, research areas influenced by the NPBs have broadened to include network design, programming languages, compilers, and tools. Google Scholar yields over 27,000 results for the NPBs, with about 7,400 citations since 2014.
“The paper and benchmark capture specifications and implementations of an important set of representative scientific codes,” said Jack Dongarra, SC15 Test of Time Award Chair from the University of Tennessee, Knoxville. “The work is still actively used, and has inspired numerous sets of benchmarking codes that continue to drive research and development innovation.”
Today’s version of the benchmarks is alive and well, and continues to significantly influence NASA projects. It is used around the world by national labs, universities, and computer vendors to evaluate the sustained performance of highly parallel supercomputers and the capabilities of parallel compilers and tools.
- View this year’s winning paper
- For more information on the Test of Time Award, click here.
Previous Test of Time Winners
- SC14: A Multi-level Algorithm for Partitioning Graphs | Bruce Hendrickson and Rob Leland, Sandia National Laboratories
Hendrickson, Affiliated Professor of Computer Science at University of New Mexico and Senior Manager for Extreme Scale Computing at Sandia National Laboratories, and Leland, Director of Computing Research at Sandia National Laboratories, were selected for their achievements in laying the inspirational groundwork for graph partitioning. Published in 1995 in the Proceedings of Supercomputing, “A Multi-level Algorithm for Partitioning Graphs” has had a tremendous impact on parallel computing, as graph partitioning lies at the heart of numerous scientific computations and is actively used to this day.
“The innovative methods so elegantly introduced in this paper represented the starting point for a collection of popular partitioning and load-balancing approaches, together with toolsets that have enabled scalable parallelism for countless applications over the past two decades,” says Ewing (“Rusty”) Lusk, Argonne Distinguished Fellow Emeritus at Argonne National Laboratory.
Multi-level graph partitioning works by repeatedly coarsening a graph into a series of smaller graphs, partitioning the smallest of them, and then projecting the result back up to the original graph, refining it along the way. Hendrickson and Leland were the first to develop this concept, in their initial software, Chaco, and their work — largely through the publication of this paper — served as the basis for many widely used libraries in the HPC community.
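As a rough illustration of the multilevel scheme — and not Chaco’s actual algorithm, which combines the multilevel framework with spectral methods and Kernighan–Lin-style refinement — here is a minimal runnable sketch: coarsen by collapsing a matching, bisect the smallest graph, and project the result back up. The adjacency-dict representation and the naive base-case split are assumptions chosen for brevity.

```python
# Minimal sketch of multilevel bisection (coarsen / partition / project back),
# assuming an undirected graph stored as an adjacency dict {node: set(neighbors)}.

def coarsen(adj):
    """Collapse a maximal matching; return the coarse graph and the
    mapping from fine nodes to coarse nodes."""
    mapping, matched = {}, set()
    for u in adj:
        if u in matched:
            continue
        partner = next((v for v in adj[u] if v not in matched and v != u), None)
        cid = len(set(mapping.values()))   # next fresh coarse-node id
        mapping[u] = cid
        matched.add(u)
        if partner is not None:            # merge the matched pair
            mapping[partner] = cid
            matched.add(partner)
    coarse = {}
    for u, nbrs in adj.items():
        cu = mapping[u]
        coarse.setdefault(cu, set())
        for v in nbrs:
            if mapping[v] != cu:
                coarse[cu].add(mapping[v])
    return coarse, mapping

def naive_bisect(adj):
    """Base case: split a tiny graph in half by node order."""
    nodes = sorted(adj)
    return {u: (0 if i < len(nodes) / 2 else 1) for i, u in enumerate(nodes)}

def partition(adj):
    """Multilevel bisection: recurse on coarser graphs, then project back."""
    if len(adj) <= 4:
        return naive_bisect(adj)
    coarse, mapping = coarsen(adj)
    if len(coarse) == len(adj):            # no edges matched; cannot coarsen further
        return naive_bisect(adj)
    coarse_part = partition(coarse)
    # Projection step: each fine node inherits its coarse node's side.
    # (Real implementations refine here, e.g. with Kernighan-Lin passes.)
    return {u: coarse_part[mapping[u]] for u in adj}

# Toy example: a 6-cycle splits into two contiguous arcs.
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(partition(ring))
```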
“The idea of hierarchical graph partitioning, as introduced by Hendrickson and Leland, has proven essential, especially given the increased importance of unstructured meshes in science and engineering simulations,” says Kathy Yelick, Associate Laboratory Director of Computing Sciences at the Lawrence Berkeley National Laboratory and Professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley. “Today, this methodology helps deal with the exponential increase in computational problem sizes and the increased scale of parallelizing these problems.”
You can read Hendrickson and Leland’s paper on IEEE’s Xplore Digital Library.
- SC13: The Omega Project and Constraint Based Analysis Techniques in High Performance Computing | William Pugh, Professor Emeritus of Computer Science at the University of Maryland at College Park
The Omega test paper was one of the first to propose general use of an exact algorithm for array data dependence analysis: the problem of determining whether two array references are aliased. Knowing this is essential to knowing which loops can be run in parallel. Array data dependence reduces to determining whether a set of affine constraints has an integer solution. This problem is NP-complete, but the paper described an algorithm that was both fast in practice and always exact. More important than the Omega test’s exactness, it could handle arbitrary affine constraints (as opposed to many existing algorithms, which could only handle constraints occurring in certain pre-defined patterns) and could produce symbolic answers rather than just yes/no answers. This work was the foundation of the Omega project and library, which significantly extended the capabilities of the Omega test and expanded the range of problems and domains to which it could be applied. The Omega library could calculate actual data flow (rather than just aliasing), analyze and represent loop transformations, calculate the array sections that needed to be communicated, and generate loop nests.
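As a toy illustration of the dependence question — not the Omega test itself, which decides such systems exactly using an integer extension of Fourier–Motzkin variable elimination — the following sketch simply enumerates a small iteration space. The loop bodies and bounds are invented for the example.

```python
# Array data dependence as integer feasibility. For the loop
#     for i in range(N): a[2*i + 1] = a[2*i]
# a dependence between iterations i and j exists iff the affine system
#     2*i + 1 == 2*j,  0 <= i < N,  0 <= j < N
# has an integer solution. The Omega test decides such systems exactly;
# this sketch brute-forces them, which only works because N is tiny.

def dependent(write_index, read_index, n):
    """Return the (i, j) pairs where iteration i writes the array
    element that iteration j reads."""
    return [(i, j) for i in range(n) for j in range(n)
            if write_index(i) == read_index(j)]

# a[2*i + 1] = a[2*i]: writes hit odd indices, reads hit even ones, so
# 2*i + 1 == 2*j has no integer solution -> no dependence, and the
# iterations can safely run in parallel.
print(dependent(lambda i: 2 * i + 1, lambda j: 2 * j, n=10))  # []

# a[2*i + 2] = a[2*i]: now 2*i + 2 == 2*j gives j = i + 1, a genuine
# loop-carried dependence that forbids naive parallelization.
print(dependent(lambda i: 2 * i + 2, lambda j: 2 * j, n=10))  # [(0, 1), (1, 2), ...]
```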
About SC15
The SC15 Web site states: “HPC is transforming our everyday lives, as well as our not-so-ordinary ones. From nanomaterials to jet aircraft, from medical treatments to disaster preparedness, and even the way we wash our clothes, the HPC community has transformed the world in multifaceted ways.
“For its 27th anniversary, the annual SC Conference will return to Austin, TX, a city that continues to develop new ways of engaging our senses and incubating technology of all types, including supercomputing. SC15 will yet again provide a unique venue for spotlighting HPC and scientific applications, and innovations from around the world.
“SC15 will bring together the international supercomputing community — an unparalleled ensemble of scientists, engineers, researchers, educators, programmers, system administrators and developers — for an exceptional program of technical papers, informative tutorials, research posters and Birds-of-a-Feather (BOF) sessions. The SC15 Exhibition Hall will feature exhibits of the latest and greatest technologies from industry, academia and government research organizations, with many of these technologies making their debut in Austin. No conference is better poised to demonstrate how HPC can transform both the everyday and the incredible.”