In the research projects it conducts and in the way it conducts research, the National Renewable Energy Laboratory (NREL) in Golden, CO, lives out the true meaning of its energy-efficient creed. In this way, NREL, one of the U.S. Department of Energy’s national laboratories, is that rarest of entities: a preacher of virtue that incorporates virtue into its daily life.
NREL’s sincerity of purpose begins with its Intel-based Peregrine supercomputer and the ultra-efficient data center in which it resides. The supercomputer is operated as a computational user facility supporting the mission of the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy. Peregrine is used by scientists and engineers advancing energy efficiency and renewable energy research in areas such as bioenergy, wind, solar and energy systems integration. The innovative liquid-cooled system and ultra-efficient data center not only require a fraction of the electricity needed to cool conventional HPC systems but also serve a central role in the highly efficient heating system for the facility’s office and laboratory space, saving approximately one million dollars in annual energy costs relative to a typical energy-efficient data center.
NREL is pioneering new integrated strategies and techniques for cutting the energy use of a major consumer of electrical power: the data center. As systems grow larger and hotter, the day is approaching when data centers’ electrical demands — for air conditioning and fans — will outstrip some municipalities’ ability to deliver power. Unless, that is, there’s a change in the way systems and data centers are cooled.
NREL is leading this change. Six years ago, the lab set out to build the world’s most energy-efficient data center, taking a holistic “chips to bricks” approach that would put to use both the bytes of information and the BTUs of heat generated by the computers. NREL wanted not only world-class compute power but also a supercomputing system and data center that consume a fraction of the electrical power of comparable HPC systems on the market. The result is Peregrine, launched in 2013, which utilizes more than 30,000 Intel Xeon processor cores and 576 Intel Xeon Phi coprocessors. Peregrine was recently expanded with an additional 1,152 nodes (each with 64 GB of memory), bringing overall peak performance to 2.24 petaflops (more than two quadrillion calculations per second). The nodes are connected with 56 Gb/s InfiniBand. Peregrine runs the Linux operating system and has a dedicated Lustre file system with about 2.25 petabytes of online storage.
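For a sense of where a headline figure like 2.24 petaflops comes from, the sketch below walks through the standard peak-performance arithmetic. The per-node numbers are illustrative assumptions, not NREL’s published node configuration, and the result deliberately covers only the 1,152-node expansion, not the original nodes or the Xeon Phi coprocessors.

```python
# Back-of-the-envelope peak-performance arithmetic for a cluster like
# Peregrine. The per-node figures below are illustrative assumptions,
# not NREL's published configuration.

def peak_flops(nodes, cores_per_node, clock_ghz, flops_per_cycle):
    """Theoretical peak = nodes x cores x clock x FLOPs issued per cycle."""
    return nodes * cores_per_node * clock_ghz * 1e9 * flops_per_cycle

# Hypothetical node: dual-socket Xeon, 24 cores total, 2.5 GHz, and
# 16 double-precision FLOPs per cycle per core (AVX2 fused multiply-add).
per_node = peak_flops(nodes=1, cores_per_node=24, clock_ghz=2.5,
                      flops_per_cycle=16)
print(f"per node: {per_node / 1e12:.2f} TFLOPS")   # ~0.96 TFLOPS

# Scaled across the expansion's 1,152 nodes (accelerators excluded):
print(f"1,152 nodes: {peak_flops(1152, 24, 2.5, 16) / 1e15:.2f} PFLOPS")
```

Under these assumed values the expansion alone would contribute roughly 1.1 petaflops; the balance of the 2.24-petaflop total comes from the original nodes and coprocessors.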
Beyond compute power, Intel processors contribute to Peregrine’s energy efficiency. Intel’s latest generation of chip technology is “the most power-efficient parallel processor in the world,” according to the Green500, a ranking of supercomputers by energy efficiency.
Researchers are using Peregrine to
- conduct research to reduce the cost of electricity generated from wind energy. Peregrine is used to process large-scale wind resource data sets, simulate whole wind plants composed of multiple turbines to optimize energy capture, and analyze turbine connectivity with the utility grid. With Peregrine, researchers can now simulate the upwind and downwind impacts of multi-turbine arrays on wind farms; before Peregrine, detailed simulations were limited to single blades and turbines.
- advance solar energy technologies with research on photovoltaics (PV), solar radiation, systems integration and solar thermal, including theory and modeling for materials design, optimal siting of systems, integration of PV systems into the grid, development of parabolic trough technology for electricity generation and technologies that lower the cost of solar water heating systems. Peregrine is enabling scientists to simulate materials with “supercell” models of 1,000 atoms or more, a significant increase in simulation scale and detail.
- develop and test novel approaches to biofuels while reducing their performance risks and improving their commercial viability, including processes for converting cellulosic biomass — plant matter such as trees, grasses, agricultural residue, algae and other biological material — into fuels and chemicals.
- lead efforts to develop new technical solutions that improve the energy efficiency of residential and commercial buildings and to accelerate the integration of renewable energy technologies with buildings.
Beyond its support of key clean energy research, Peregrine plays a central role in another aspect of NREL’s mission: serving as a leader in, and showcase for, energy efficiency that other organizations, in both the private and public sectors, can use as a model. NREL teamed with Intel and Hewlett Packard Enterprise to use Peregrine as the primary source of heat for NREL’s 182,000-square-foot Energy Systems Integration Facility (ESIF).
As supercomputers scale up by orders of magnitude, energy consumption and heat dissipation put enormous stress on HPC systems, the facilities that house them and organizational operating budgets. According to NREL, Peregrine’s direct-component liquid cooling allows much greater performance density, cuts energy consumption in half and creates efficiencies with other building energy systems.
“Cooling supercomputers the usual way, with fans and so forth, is like putting a glass of water in a room and turning up the air conditioning in your house to cool it,” said Steve Hammond, Director of NREL’s Computational Science Center. “Conventional data center heat rejection techniques are incredibly wasteful.”
“The dry disconnect warm water cooling infrastructure of the HPE Apollo 8000 is a patented feature that provides the benefits of water cooling while overcoming the barriers to practical, efficient deployment and operation; effectively eliminating the risk found in existing water-cooled supercomputers. This allows for an air-cooled-like maintenance process with the ability to access server trays without interrupting system operation. This unique feature from HPE is an example of innovative solutions that are helping to advance the performance, density and efficiency of supercomputing.” – Bill Mannel, HPE
The liquid that circulates through the Peregrine system is heated to 95 to 110°F, and about 95 percent of the heat the machine generates is captured directly to liquid. Rather than rejecting that waste heat and separately heating the research facility’s office and laboratory space, NREL circulates it through the heating system of the ESIF. This reuse of waste heat saves about $200,000 a year that would otherwise be spent heating the offices and laboratories.
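As a rough sketch of what that capture rate means in thermal terms, the snippet below applies the 95 percent figure to an assumed one-megawatt average IT load; the load number is a hypothetical placeholder, not a published NREL figure.

```python
# Rough estimate of heat recovered for the ESIF heating loop.
# capture_fraction comes from the article; it_load_mw is a
# hypothetical placeholder for the system's average power draw.

it_load_mw = 1.0          # assumed average electrical load (hypothetical)
capture_fraction = 0.95   # share of heat captured directly to liquid

# Essentially all electrical power drawn by a computer ends up as heat,
# so the liquid loop recovers roughly:
heat_mw = it_load_mw * capture_fraction
annual_mwh = heat_mw * 8760  # thermal energy over a year of operation

print(f"recovered heat: {heat_mw:.2f} MW thermal")
print(f"annual thermal energy: {annual_mwh:,.0f} MWh")
```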
The results are impressive. The NREL campus central-plant boiler is normally turned on in September, as Colorado’s early autumn sets in. This past year, that was postponed a month because of the ability to use Peregrine’s waste heat. In addition, the heat is piped under the sidewalks and plaza areas around the ESIF, melting snow and ice in the winter. Bottom line: NREL has demonstrated a power usage effectiveness (PUE) rating far better than its design goal of 1.06, typically achieving a PUE between 1.03 and 1.04. Developed by The Green Grid consortium, PUE is the ratio of the total energy used by a data center facility to the energy delivered to its computing equipment.
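In code, the metric is a one-line ratio. The kilowatt-hour values below are illustrative choices that land in the range NREL reports, and the comparison value for a conventional data center is a commonly cited industry figure rather than something from NREL.

```python
# PUE = total facility energy / energy delivered to the IT equipment.
# A PUE of 1.0 would mean zero overhead for cooling and power delivery.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# Illustrative: 1,035 kWh drawn by the whole facility for every
# 1,000 kWh reaching the computers gives PUE 1.035, about 3.5%
# overhead, consistent with NREL's reported 1.03 to 1.04.
print(pue(1035, 1000))   # 1.035

# For comparison, conventional data centers are often cited in the
# PUE 1.5 to 2.0 range: 50-100% overhead on top of the IT load.
print(pue(1800, 1000))   # 1.8
```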
There were plenty of skeptics about NREL’s plans for Peregrine and about the holistic energy efficiency strategy for the ESIF, along with concerns about the risks of using liquid to cool an HPC system. But Hammond said Peregrine has performed better and delivered a lower failure rate than expected, because water is a superior medium for rejecting waste heat and provides a more thermally stable system environment. The efficiencies are impressive. For example, a conventional air-cooled computer system that consumes 20 megawatts of power typically needs another six megawatts to reject its waste heat. With Peregrine’s water-cooling approach, that overhead drops to one megawatt, a sixfold reduction.
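Those figures are easy to check. The sketch below restates Hammond’s 20-megawatt example; the “cooling-only PUE” framing is ours (it treats heat rejection as the only overhead), not a number from the article.

```python
# Cooling-overhead comparison using the 20 MW example quoted above.
# "Cooling-only PUE" here counts only heat-rejection power as overhead.

it_power_mw = 20.0       # power consumed by the compute system itself
cooling_mw = {"air-cooled": 6.0, "liquid-cooled": 1.0}

for label, overhead in cooling_mw.items():
    total = it_power_mw + overhead
    print(f"{label}: {overhead:g} MW for heat rejection "
          f"({overhead / it_power_mw:.0%} of IT load), "
          f"cooling-only PUE ~ {total / it_power_mw:.2f}")

# air-cooled: 6 MW for heat rejection (30% of IT load), cooling-only PUE ~ 1.30
# liquid-cooled: 1 MW for heat rejection (5% of IT load), cooling-only PUE ~ 1.05
```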
The ultra-efficient, liquid-cooled supercomputer earned NREL and Hewlett Packard Enterprise a 2014 R&D 100 Award and helped the ESIF win R&D Magazine’s 2014 Laboratory of the Year award and the Energy Department’s 2015 Sustainability Award. NREL’s showcase facility is being used to evangelize data center energy efficiency, hosting an average of two groups per week that tour the data center and study NREL’s green techniques.
“The trend in computing is toward higher levels of integration and more kilowatts of power consumed per square foot in the data center, and liquid cooling provides the most effective way to get heat out of higher-power-density compute racks,” said Hammond. “The data center culture, in general, tends to be risk-averse; everyone ‘knows’ water and electronics don’t mix. However, for two years now, we’ve been demonstrating that this can be done very cost effectively, reliably, efficiently and safely.”