Examining the Future of Scientific HPC
Over 80 scientists, industry experts and computing professionals from 14 countries and five continents recently convened in Annecy, France, to discuss the future of high performance computing, with a focus on scientific applications. Themes at the ninth biennial Computing in Atmospheric Sciences (CAS) workshop included the coming exaflop (10^18 floating point operations per second) era, the international collaboration needed to get there, and the scalability of fully integrated systems. The workshop, organized and hosted by the National Center for Atmospheric Research (NCAR), provided a venue for scientists, facilities managers, computing systems experts and industry vendors to discuss future needs and stimulate creative solutions in scientific computing.
“This meeting has an excellent balance between atmospheric science needs, computing center capabilities and vendor plans,” said Jairo Panetta of the Brazilian Center for Weather Forecasts and Climate Studies (CPTEC), one of the conference presenters. “There is really no other meeting like it.”
Keynote speakers highlighted a diverse set of topics, including physical facility challenges, data assimilation methods for atmospheric models, and computing partnerships at the local, regional and international scales. Collaboration, both scientific and technological, was a recurring theme as the complexity of systems, and the demands placed on computing center managers and users, continue to grow.
Arndt Bode, a professor of informatics at Technische Universität München, discussed an international initiative to make a supercomputing ecosystem available to scientists and industrial users across Europe. The Partnership for Advanced Computing in Europe (PRACE) was launched in 2008 as a two-year project funded by the EU, with 17 participating nations.
On the climate modeling and weather forecasting fronts, scientists showcased how increasing the resolution of simulations is improving the quality of their projections, and how nested simulations and the linking of various Earth system models are making them more accurate. These improvements continually push the limits of available computing capacity.
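A rough sense of why resolution is so expensive comes from a textbook scaling estimate (not a figure presented at the workshop): halving a model's horizontal grid spacing doubles the number of grid points in each horizontal direction and, because of the numerical stability limit on the timestep, roughly doubles the number of timesteps needed to cover the same forecast period, so

    cost ∝ Nx × Ny × Nsteps  →  (2 Nx) × (2 Ny) × (2 Nsteps) = 8 × cost

before any increase in vertical resolution or model complexity is counted.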
Technical presentations noted that the number of processors used in applications continues to increase, making greater parallelism the primary route to faster applications. Numerical modeling grids are also becoming more diverse, largely so that the growing number of cores can be used more effectively and efficiently, and new algorithms are showing linear scalability out to on the order of 100,000 cores.
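The pattern behind that kind of scalability can be illustrated with a minimal sketch in MPI (this is not code from any workshop presentation; the grid size and variable names are hypothetical):

/* Minimal sketch: split a hypothetical 1-D model grid across MPI ranks.
   Each rank works only on its own slice, and one collective call
   combines the partial results. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const long global_points = 1000000L;           /* assumed grid size; real models are far larger */
    long local_points = global_points / nprocs;    /* slice owned by this rank (remainder ignored for brevity) */
    long start = (long)rank * local_points;

    double local_sum = 0.0, global_sum = 0.0;
    for (long i = start; i < start + local_points; i++)
        local_sum += (double)i;                    /* stand-in for per-gridpoint work */

    /* Combine the per-rank partial sums into one global result. */
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d  global_sum=%e\n", nprocs, global_sum);

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun, the same source runs unchanged on two cores or many thousands; because each rank handles only its own slice of the grid, adding cores shrinks the per-core workload, which is the basic property that large-core-count scaling depends on.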
The need for more processors translates into a need for more electrical power. Many existing computing facilities cannot meet these power demands with their current electrical and mechanical infrastructure and are planning to modify existing buildings or construct new ones. IBM engineer Don Grice projected an exaflop system by 2018 and discussed the shifts from historical computing paradigms that will be required to keep the power consumption of such a system as low as possible.
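For scale, a back-of-the-envelope estimate (not a figure from the talk): the first petaflop (10^15 floating point operations per second) systems of 2008-2009 drew on the order of a few megawatts, so simply scaling that technology by the factor of one thousand needed to reach an exaflop would imply

    1,000 × (a few MW) ≈ several gigawatts,

far more than any data center could supply, which is why changes in the underlying computing approach, rather than straightforward scaling, dominate the path to exascale.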
A special session at this year’s meeting covered facility development, examining the challenges in designing data centers for scientific research. “Building centers within zoning, power and budget constraints that can meet scientific computing needs is the challenge we all face today,” said Aaron Andersen, facilities manager at NCAR. “Learning from the experience of others is invaluable for us, as we work through our own data center design.”
A key conclusion of the conference was that the recent increase in the number of cores packaged in high-end computing systems has enabled transformational science. In the atmospheric sciences, weather forecasts and climate projections continue to become more accurate. Advances in these areas directly affect quality of life around the world, and future discoveries depend on the continued availability and advancement of the most powerful computer systems.