On July 29, 2015, President Obama issued an Executive Order establishing the National Strategic Computing Initiative (NSCI) to ensure the United States continues to lead in the development and deployment of cutting-edge computing systems, which are essential to economic competitiveness, scientific discovery and national security. Through a combination of processing capability and storage capacity, high-performance computing (HPC) systems can solve computational problems that are beyond the capability of small- to medium-scale systems and are vital to the Nation’s interests in science, medicine, engineering, technology and industry.
In a blog post on the Office of Science and Technology Policy (OSTP) website, Tom Kalil, Deputy Director for Technology and Innovation at OSTP, and Jason Miller, Deputy Assistant to the President and Deputy Director of the National Economic Council, explain that “this coordinated research, development and deployment strategy will draw on the strengths of departments and agencies to move the Federal government into a position that sharpens, develops and streamlines a wide range of new 21st century applications. It is designed to advance core technologies to solve difficult computational problems and foster increased use of the new capabilities in the public and private sectors.”
An effort to create a cohesive, multi-agency strategic vision and Federal investment strategy for HPC, the NSCI will be executed in collaboration with industry and academia to maximize the benefits of HPC for the United States. The NSCI will spur the creation and deployment of computing technology at the leading edge, helping to advance Administration priorities for economic competitiveness, scientific discovery and national security.
Over the next decade, the goal is to build supercomputers capable of one exaflop (10¹⁸ operations per second). It is also important to note that HPC in this context is not just about the speed of the computing device itself. As the President’s Council of Advisors on Science and Technology has concluded, high-performance computing “must now assume a broader meaning, encompassing not only flops, but also the ability, for example, to efficiently manipulate vast and rapidly increasing quantities of both numerical and non-numerical data.”
The National Strategic Computing Initiative has five strategic themes:
- Create systems that can apply exaflops of computing power to exabytes of data.
The historic focus of HPC has been on computers designed to simulate models of physical systems, such as flying aircraft, colliding automobiles, interacting molecules, evolving weather and climate, and seismic activity. Their power is often expressed in flops (floating-point operations per second), with the NSCI target being one exaflop (10¹⁸ operations per second). In the last 10 years, a new class of HPC system has emerged to collect, manage and analyze vast quantities of data arising from diverse sources, such as Internet Web pages and scientific instruments. These “big data” systems will approach scales measured in exabytes (10¹⁸ bytes).
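To make these scales concrete, the short Python sketch below does some back-of-the-envelope arithmetic; the 10²¹-operation workload is an illustrative assumption, not an NSCI benchmark.

```python
# Back-of-the-envelope arithmetic for the scales discussed above.
# The workload size below is an illustrative assumption, not an NSCI target.

EXAFLOP = 1e18   # floating-point operations per second
EXABYTE = 1e18   # bytes

operations = 1e21          # hypothetical simulation workload (total operations)
petaflop_machine = 1e15    # a present-day petascale system (ops/sec)
exaflop_machine = EXAFLOP  # the NSCI target class of machine

print(f"Petascale runtime: {operations / petaflop_machine / 3600:.1f} hours")
print(f"Exascale runtime:  {operations / exaflop_machine / 3600:.2f} hours")

# One exabyte expressed in more familiar units
terabyte = 1e12
print(f"1 exabyte = {EXABYTE / terabyte:,.0f} terabytes")
```

The same job that would occupy a petascale machine for more than 11 days finishes in under 20 minutes at exascale, which is the kind of gap the initiative is aiming at.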
By combining the computing power and the data capacity of these two classes of HPC systems, deeper insights can be gained through new approaches that combine simulation with actual data. For example, simulations of weather could be coupled with real data from satellites and other sensors. This combination of capabilities will enable data-analytic methods that require large amounts of numeric processing. This could lead, for example, to tools that assist radiologists in detecting cancer in X-ray images, where the diagnostic ability is learned automatically by analyzing large collections of medical data. Achieving this combination will require finding a convergence between the hardware and software technology for these two classes of systems.
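As a rough illustration of the simulation-plus-data idea, the toy Python sketch below nudges a crude forecast model toward noisy synthetic sensor readings; the model, the nudging weight and the data are all invented for illustration and do not represent any agency’s actual methods.

```python
import random

# Toy illustration of coupling a simulation with observed data:
# a simple forecast model is "nudged" toward noisy sensor readings.

def model_step(temp):
    """One step of a crude forecast model: relax toward 15 degrees."""
    return temp + 0.1 * (15.0 - temp)

true_temp = 22.0
forecast = 18.0       # the model's (wrong) initial guess
nudge_weight = 0.3    # how strongly observations correct the forecast

for step in range(10):
    true_temp = model_step(true_temp) + random.gauss(0, 0.2)  # "reality"
    observation = true_temp + random.gauss(0, 0.5)            # noisy sensor
    forecast = model_step(forecast)                           # pure simulation
    forecast += nudge_weight * (observation - forecast)       # blend in data
    print(f"step {step}: truth={true_temp:5.2f}  forecast={forecast:5.2f}")
```

At production scale, the “reality” line is replaced by streams of satellite and sensor data measured in petabytes or more, which is why the compute-intensive and data-intensive sides of HPC have to converge.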
The NSCI seeks to drive the convergence of compute-intensive and data-intensive systems, while also increasing performance overall. Government agencies will work with computer vendors to create advanced systems for applications involving combinations of modeling, simulation, and data analytics. Government research programs will develop new approaches to hardware, system architectures, and programming tools. Government agencies will also foster the transition of these technologies from research to deployment.
- Keep the United States at the forefront of HPC capabilities.
The United States has been the leader in building large-scale computing systems and in applying them for modeling and simulation. On the data-intensive side, the value of scaling up to collect and process massive amounts of data was first recognized and exploited by U.S. companies. Other countries have undertaken major initiatives to create their own high-performance computer technology. Sustaining this capability requires supporting a complete ecosystem of users, vendor companies, software developers, and researchers.
The Nation must preserve its leadership role in creating HPC technology and using it across a wide range of applications. The Department of Energy (DOE) has developed a program to deliver exascale computers, that is, systems achieving exaflop performance on important applications. DOE has also identified a number of challenges in reaching exascale and will support research to overcome them. Other agencies will work with DOE to ensure that their missions benefit from these next-generation capabilities, which in turn will help sustain the HPC ecosystem.
- Improve HPC application developer productivity.
Current HPC systems are very difficult to program, requiring careful measurement and tuning to get maximum performance on the targeted machine. Shifting a program to a new machine can require repeating much of this process, along with verifying that the new code produces the same results as the old. The level of expertise and effort required to develop HPC applications poses a major barrier to their widespread use.
Government agencies will support research on new approaches to building and programming HPC systems that make it possible to express programs at more abstract levels and then automatically map them onto specific machines. In working with vendors, agencies will emphasize the importance of programmer productivity as a design objective. Agencies will foster the transition of improved programming tools into actual practice, making the development of applications for HPC systems no more difficult than it is for other classes of large-scale systems.
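The sketch below illustrates the productivity point in miniature, assuming a simple three-point smoothing computation (not an NSCI workload): the same calculation is written once as an explicit loop of the kind typically hand-tuned per machine, and once at a higher level with NumPy, and the two versions are checked for agreement the way a ported code would be.

```python
import numpy as np

def smooth_loop(x):
    """Explicit three-point averaging loop, the 'hand-written' version."""
    out = x.copy()
    for i in range(1, len(x) - 1):
        out[i] = (x[i - 1] + x[i] + x[i + 1]) / 3.0
    return out

def smooth_highlevel(x):
    """The same averaging expressed once with whole-array operations."""
    out = x.copy()
    out[1:-1] = (x[:-2] + x[1:-1] + x[2:]) / 3.0
    return out

data = np.random.rand(1_000_000)

# When a code is moved or rewritten, agreement with the original
# results is checked within a numerical tolerance.
assert np.allclose(smooth_loop(data), smooth_highlevel(data))
print("high-level and hand-written versions agree")
```

The high-level version expresses intent in one line and leaves the machine-specific mapping to the library; raising that level of abstraction across HPC programming generally is the goal described above.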
- Make HPC readily available.
Right now, there are many companies and many research projects that could benefit from HPC technology, but they lack expertise and access. Many scientists and engineers also lack training in the concepts and tools for modeling and simulation and data analytics.
Agencies will work with both computer manufacturers and cloud providers to make HPC resources more readily available so that scientific researchers in both the public and private sectors have ready access. Agencies will sponsor the development of educational materials for next generation HPC systems, covering fundamental concepts in modeling, simulation and data analytics, as well as the ability to formulate and solve problems using advanced computing.
- Establish hardware technology for future HPC systems.
Computer system performance has increased at a steady rate over the past 70 years, largely through improvements in the underlying hardware technology, but semiconductor technology is reaching its scaling limits. There are many possible successors to current semiconductor technology, but none is close to being ready for deployment.
A comprehensive research program is required to ensure continued improvements in HPC performance beyond the next decade. The Government must sustain fundamental, precompetitive research on future hardware technology to ensure ongoing improvements in high performance computing.
“By strategically investing now,” Kalil and Miller say, “we can prepare for increasing computing demands and emerging technological challenges, building the foundation for sustained U.S. leadership for decades to come, while also expanding the role of high-performance computing to address the pressing challenges faced across many sectors.”