Scientists at ETH Zurich and IBM Research, in collaboration with the Technical University of Munich (TUM) and Lawrence Livermore National Laboratory (LLNL), have set a new supercomputing record in fluid dynamics, using 6.4 million threads on LLNL’s 96-rack Sequoia IBM BlueGene/Q, one of the fastest supercomputers in the world.
The simulations resolved unique phenomena associated with clouds of collapsing bubbles, which have several potential applications, including improving the design of high-pressure fuel injectors and propellers, shattering kidney stones with the high pressures produced by collapsing bubbles, and emerging cancer therapies that use bursting bubbles to destroy tumor cells and deliver drugs precisely.
The team of scientists performed the largest simulation ever in fluid dynamics, employing 13 trillion cells and reaching a sustained performance of 14.4 petaflops on Sequoia, 73 percent of the supercomputer’s theoretical peak and an unprecedented figure for flow simulations.
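The reported efficiency figure can be sanity-checked with a quick back-of-the-envelope calculation. A minimal sketch, assuming Sequoia's theoretical peak of roughly 20.1 petaflops (a figure taken from public Top500 data, not stated in the article):

```python
# Back-of-the-envelope check of the reported efficiency figure.
# Sequoia's theoretical peak of ~20.1 petaflops is an assumption
# taken from public Top500 data, not from the article itself.
peak_pflops = 20.1
sustained_pflops = 14.4

efficiency = sustained_pflops / peak_pflops
print(f"Sustained fraction of peak: {efficiency:.0%}")
```

The result, about 72 percent, is consistent with the 73 percent quoted above, the small difference being rounding in the reported numbers.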
The simulations resolved 15,000 bubbles, a 150-fold improvement over previous research, with a 20-fold reduction in time to solution. These improvements are crucial because they pave the way for investigating a complex phenomenon called cloud cavitation collapse, which occurs when vapor cavities, or bubbles, form in a liquid due to changes in pressure. When the bubbles implode, they can generate damaging shockwaves, which can be harnessed for applications in healthcare and industrial technology.
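The collapse of a single spherical bubble is classically described by the Rayleigh-Plesset equation. The sketch below integrates a stripped-down form of that equation (no viscosity, surface tension, or gas content); it is only an illustration of the underlying physics, not the team's method, which resolves the full compressible flow around thousands of interacting bubbles:

```python
import math

# Illustrative collapse of a single empty spherical bubble using a
# simplified Rayleigh-Plesset equation:
#     R*R'' + 1.5*R'^2 = (p_b - p_inf) / rho
# All parameter values here are generic textbook choices, not taken
# from the simulations described in the article.
rho = 1000.0       # water density (kg/m^3)
p_inf = 1.0e5      # ambient pressure (Pa)
p_b = 0.0          # pressure inside the (empty) bubble (Pa)
R0 = 1.0e-3        # initial bubble radius (m)

R, Rdot, t, dt = R0, 0.0, 0.0, 1.0e-9
while R > 0.1 * R0:                       # integrate until the radius hits 10%
    Rddot = ((p_b - p_inf) / rho - 1.5 * Rdot**2) / R
    Rdot += dt * Rddot                    # semi-implicit Euler step
    R += dt * Rdot
    t += dt

# Rayleigh's classical estimate for the full collapse time:
# t_c ~ 0.915 * R0 * sqrt(rho / delta_p)
t_rayleigh = 0.915 * R0 * math.sqrt(rho / (p_inf - p_b))
print(f"time to 10% radius: {t:.3e} s  (Rayleigh estimate: {t_rayleigh:.3e} s)")
```

The integrated time agrees closely with Rayleigh's analytical estimate, and the near-singular acceleration at small radii hints at why the real, fully resolved problem is so demanding.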
The violence and short time scales of this process have made its quantitative understanding elusive for both experimentalists and computational scientists. And while supercomputers have long been considered a solution, large-scale flow simulations have not previously run effectively on massively parallel architectures.
“In the last 10 years we have addressed a fundamental problem of computational science: the ever-increasing gap between hardware capabilities and their effective utilization to solve engineering problems,” said Petros Koumoutsakos, director of the Computational Science and Engineering Laboratory at ETH Zurich, who led the project.
He added: “We have based our developments on finite volume methods, perhaps the most established and widespread method for engineering flow simulations. We have also invested significant effort in designing software that takes advantage of today’s parallel computer architectures. It is the proper integration of computer science and numerical mathematics that enables such advances.”
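The finite volume idea Koumoutsakos refers to can be shown in miniature: cell averages of a conserved quantity are updated by the fluxes through cell faces. The sketch below applies a first-order upwind flux to 1D linear advection on a periodic grid; it is a generic textbook illustration with made-up names, not the team's compressible multiphase solver:

```python
import numpy as np

# Minimal 1D finite volume scheme for linear advection u_t + a*u_x = 0
# on a periodic domain, using a first-order upwind flux. Purely
# illustrative of the finite volume idea; not the article's solver.
def upwind_step(u, a, dx, dt):
    # For a > 0, the flux through the left face of cell i is a*u[i-1].
    flux_in = a * np.roll(u, 1)    # flux entering each cell from the left
    flux_out = a * u               # flux leaving each cell to the right
    return u - dt / dx * (flux_out - flux_in)

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)   # Gaussian pulse as initial cell averages
dx, a = 1.0 / n, 1.0
dt = 0.5 * dx / a                     # CFL number 0.5 keeps the scheme stable
for _ in range(100):
    u = upwind_step(u, a, dx, dt)
# After 100 steps the pulse has advected a * 100 * dt = 0.5 units to the
# right, with some numerical smearing, as expected of a first-order scheme.
```

Because each cell's update needs only its neighbors' face fluxes, schemes of this kind decompose naturally across distributed memory, which is part of why finite volume methods map well onto machines like BlueGene/Q.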
“We were able to accomplish this using an array of pioneering hardware and software features within the IBM BlueGene/Q platform that allowed the fast development of ultra-scalable code, which achieves an order of magnitude better performance than the previous state of the art,” said Alessandro Curioni, head of the mathematical and computational sciences department at IBM Research – Zurich. “While the Top500 list will continue to generate global interest, the applications of these machines, and how they are used to tackle some of the world’s most pressing human and business issues, more accurately quantify the evolution of supercomputing.”
These simulations are one to two orders of magnitude faster than any previously reported flow simulation. The previous major milestone came earlier this year, when a team at Stanford University broke the one-million-core barrier, also on Sequoia.
This significant achievement is one of six finalists for the 2013 Gordon Bell Prize, awarded by the Association for Computing Machinery this week at Supercomputing ’13 (SC13).