Racing to Perform World-class Research
Rob Farber
High-performance computing (HPC) developers are the race car drivers of the supercomputing world: if a piece of hardware in the supercomputer is not performing useful work, the developer is not doing their job. This puts HPC at odds with the designers and engineers who are trying to reduce the power consumption and environmental impact of these systems.
As I noted in my May 2007 Scientific Computing column, “Power and Cooling: The HPC Brick Wall,” the environmental impact of large data centers and supercomputers must be considered as part of the overall design process. For example, search engine companies now speak in terms of watts per search and are evaluated according to the amount of CO2 produced per search. CO2 generation is a measure of atmospheric pollution and is believed to be a significant contributor to global warming.
Jonathan Leake and Richard Woods performed an in-depth study for the UK Sunday Times in January 2009 to understand how much CO2 Google produces per search. While searches that take less than one second produce about 0.2 g of CO2, longer searches can generate 7 g of CO2, as much as is produced when boiling a kettle of water. These numbers give a rough idea of how much CO2 Google and Bing produce as a byproduct of all the search requests they handle worldwide each day.
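To see what those per-search figures imply in aggregate, here is a minimal back-of-envelope sketch. The 0.2 g and 7 g figures come from the study cited above; the one-billion-searches-per-day volume is a hypothetical round number I have chosen purely for illustration, not a figure from the study:

```python
# Back-of-envelope estimate of daily CO2 output from web searches.
# The 0.2 g and 7 g per-search figures come from the Sunday Times
# study; the daily search volume is a HYPOTHETICAL round number.

GRAMS_PER_TONNE = 1_000_000

def daily_co2_tonnes(searches_per_day, grams_per_search):
    """Metric tonnes of CO2 produced by a day's worth of searches."""
    return searches_per_day * grams_per_search / GRAMS_PER_TONNE

searches = 1_000_000_000  # assumed daily search volume (hypothetical)
for grams in (0.2, 7.0):
    print(f"{grams} g/search -> "
          f"{daily_co2_tonnes(searches, grams):,.0f} tonnes of CO2 per day")
```

Even at the low 0.2 g figure, a billion searches would produce 200 tonnes of CO2 per day; at 7 g per search, the same volume would produce 7,000 tonnes per day.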
I applaud the new processor designs that power down unused portions of the chip. According to AMD, most commercial servers run at roughly 10 to 15 percent efficiency, so these power-saving features can significantly reduce costs and environmental impact. By selecting products from integrators that enable these capabilities, both consumers and data centers will see the benefit of the new chips.
Just like data centers, large supercomputers consume many megawatts of power. The Oak Ridge National Laboratory Jaguar supercomputer, for example, is provisioned with five megawatts of power. Unlike data centers, these large supercomputers are national scientific resources that tend to run many applications at 80 percent or greater efficiency. This demonstrates that the scientists and engineers who are racing to perform world-class research are doing their jobs: they have packed as much science into the supercomputer as possible. My standing joke is that the fine print for HPC job descriptions should read, “must be able to cram large scientific problems into very small supercomputers.”
While a 20-percent power saving for a five-megawatt supercomputer is significant, it is not enough to ameliorate the power consumption of future systems that will be 10 or 100 times more powerful. For this reason, alternative architectures such as graphics processors are very interesting, as they deliver very high floating-point performance per watt consumed. According to the SciDAC Review, hybrid architectures offer a way to reduce the power requirement for an exascale supercomputer to 20 megawatts instead of 100 to 200 megawatts. The latter power consumption would result in a roughly $100 million power bill. With such operating costs, a standing joke is that the power companies should donate the exascale supercomputers. All jokes aside, significant steps have been taken to build large hybrid CPU/GPU supercomputers, such as the Chinese Nebulae and Tianhe-1 (meaning River in Sky) supercomputers, which contain thousands of NVIDIA and AMD GPGPUs.
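A quick sanity check shows where a figure like $100 million comes from. This sketch assumes the machine draws its full provisioned power around the clock and pays $0.10 per kilowatt-hour; both are my assumptions for illustration, as the sources quote neither a rate nor a billing period:

```python
# Sanity check of the quoted $100 million power bill.
# ASSUMPTIONS: continuous operation at full provisioned power,
# and an electricity price of $0.10/kWh (illustrative, not quoted).

HOURS_PER_YEAR = 24 * 365   # 8,760 hours
PRICE_PER_KWH = 0.10        # assumed USD per kilowatt-hour

def annual_power_bill(megawatts):
    """Annual electricity cost (USD) of a continuously running machine."""
    kwh_per_year = megawatts * 1_000 * HOURS_PER_YEAR
    return kwh_per_year * PRICE_PER_KWH

for mw in (20, 100, 200):
    print(f"{mw:>3} MW -> ${annual_power_bill(mw) / 1e6:,.1f} million per year")
```

At that assumed rate, 100 megawatts works out to roughly $90 million per year, consistent with the figure quoted above, while a 20-megawatt hybrid design would cut the bill to under $20 million.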
While the technology improvements in computation have been spectacular, conventional power-generating capabilities have lagged behind. In the HPC world, don’t forget that some inventive developer will “cram a bigger problem” into our supercomputers: the goal is to make each piece of hardware deliver great science, not to use less power. As Jeff Goodell observed in his book, Big Coal: The Dirty Secret Behind America’s Energy Future, “coal was supposed to be the engine of the industrial revolution, not the Internet revolution.” A corollary is that 21st-century leadership supercomputing also should not depend on CO2-generating 19th-century energy technology.
References
1. “The HPC Brick Wall”: www.scientificcomputing.com/the-hpc-brick-wall.aspx
2. “Revealed: the environmental impact of Google searches”: http://technology.timesonline.co.uk/tol/news/tech_and_web/article5489134.ece
3. “Building a Power Efficient HPC System”: http://blogs.amd.com/work/2010/02/11/building-a-power-efficient-hpc-system
4. SciDAC Review, “Paving the Roadmap to Exascale”: www.scidacreview.org/1001/html/hardware.html
Rob Farber is a senior research scientist at Pacific Northwest National Laboratory. He may be reached at [email protected].