The information and communications technology (ICT) sector consumes a significant share of the world's energy. The European ParaDIME project has been working to reduce the energy consumption of data centers.
With climate change high on today’s agenda, it’s important to improve energy efficiency in the ICT sector. Researchers have estimated that the electricity used by data centers worldwide already accounted for 1.1-1.5 percent of the world’s total electricity use at the start of this decade — and this figure is likely to have increased since. However, the Parallel Distributed Infrastructure for Minimization of Energy (ParaDIME) project is working to remedy this situation.
The three-year ParaDIME project came to a close at the end of September 2015. It was coordinated by the Barcelona Supercomputing Center (BSC) in Spain, in partnership with IMEC in Belgium, Dresden University of Technology in Germany, the University of Neuchâtel in Switzerland, and Cloud & Heat Technologies GmbH in Germany.
The goal of ParaDIME was to maximize the energy efficiency of data centers, and in doing so reduce electricity use and CO2 emissions. To achieve this, the ParaDIME team used a range of techniques on both the software and hardware found in data centers, from optimization of code to changing building design.
One of the most substantial changes involved deploying the data center's compute nodes in a decentralized manner and using the waste heat they produce to warm the entire building. On a smaller scale, the team experimented with heterogeneous computing platforms comprising CPUs, GPUs, and programmable FPGA accelerators, which produced substantial energy savings by running the right code on the right type of hardware.
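The "right code on the right hardware" idea can be sketched as a simple energy-aware dispatcher. This is a minimal illustration, not the project's actual runtime: the workload categories, device names, and per-task energy figures below are invented assumptions.

```python
# Illustrative sketch of energy-aware dispatch across heterogeneous devices.
# Workload kinds and joule figures are assumptions, not ParaDIME measurements.

ENERGY_PER_TASK_J = {
    # (workload kind, device) -> assumed energy per task, in joules
    ("branchy", "cpu"): 1.0,
    ("branchy", "gpu"): 3.0,
    ("branchy", "fpga"): 2.5,
    ("data_parallel", "cpu"): 5.0,
    ("data_parallel", "gpu"): 0.8,
    ("data_parallel", "fpga"): 1.2,
    ("streaming", "cpu"): 4.0,
    ("streaming", "gpu"): 2.0,
    ("streaming", "fpga"): 0.5,
}

def pick_device(kind: str) -> str:
    """Choose the device with the lowest assumed energy for this workload kind."""
    candidates = {dev: j for (k, dev), j in ENERGY_PER_TASK_J.items() if k == kind}
    return min(candidates, key=candidates.get)

if __name__ == "__main__":
    for kind in ("branchy", "data_parallel", "streaming"):
        print(kind, "->", pick_device(kind))
```

In practice such a table would come from profiling real kernels on each device; the point is simply that a lookup plus an `argmin` already captures the dispatch decision.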
The voltage supplied to the hardware was also reduced: “This is a well-known technique to reduce power,” says Oscar Palomar, a BSC researcher on ParaDIME. “We have achieved very high savings by pushing this approach beyond the safe limit of voltage: beyond that point, circuits begin to fail and results generated include errors. But with the use of error-correction techniques, we indeed achieved high savings in some processor structures.”
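The trade-off Palomar describes — accepting occasional circuit errors in exchange for lower voltage, then masking them with error correction — can be illustrated with triple modular redundancy, one classic correction technique (the sketch below is a toy model, not ParaDIME's circuit-level mechanism; the failure probability and bit-flip model are assumptions).

```python
# Toy model of computing below the safe voltage: each run may flip a random
# bit of the result. Triple modular redundancy (TMR) masks most such errors.
import random
from collections import Counter

def unreliable_compute(x, fail_prob=0.2, rng=random):
    """Model an undervolted circuit: correct result, occasionally corrupted."""
    result = x * x  # the intended computation
    if rng.random() < fail_prob:
        result ^= 1 << rng.randrange(8)  # random low-order bit flip
    return result

def tmr(x, rng=random):
    """Run the unreliable computation three times and majority-vote."""
    votes = Counter(unreliable_compute(x, rng=rng) for _ in range(3))
    value, count = votes.most_common(1)[0]
    if count >= 2:
        return value
    return tmr(x, rng)  # three distinct answers: no majority, so retry
```

Since two independent failures rarely corrupt the result in the same way, the voted answer is wrong far less often than any single run, at the cost of redundant work — the same kind of reliability-for-energy bargain the quote describes.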
Energy consumption was also effectively reduced by implementing efficient schedulers which balance heating/cooling workloads across different data centers. “We developed custom hardware to turn off nodes while still keeping the attached disks powered, to support big data while being energy efficient,” says Osman Unsal, principal investigator on the ParaDIME project. Energy-saving scheduling was also applied to virtual machines to reduce their migration costs and decrease the time required to reactivate them.
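The two scheduling ideas above — routing work to where its heat is useful, and powering off nodes that have no work — can be sketched greedily. This is a hypothetical simplification of such a scheduler; the site names, heat figures, and jobs-per-node ratio are illustrative assumptions, not ParaDIME's implementation.

```python
# Sketch of energy-aware scheduling: place each job at the site with the most
# unmet heat demand, and count how many idle nodes could be powered off
# (with disks kept powered separately). All figures are illustrative.
from dataclasses import dataclass, field

@dataclass
class DataCenter:
    name: str
    heat_demand_kw: float          # building heat the site can still absorb
    jobs: list = field(default_factory=list)

def schedule(job, centers, heat_per_job_kw=0.3):
    """Greedy placement: pick the site whose waste heat is most useful."""
    target = max(centers, key=lambda dc: dc.heat_demand_kw)
    target.jobs.append(job)
    target.heat_demand_kw = max(0.0, target.heat_demand_kw - heat_per_job_kw)
    return target.name

def idle_nodes_to_power_off(active_jobs, total_nodes, jobs_per_node=4):
    """Nodes not needed for the current load are candidates for power-off."""
    needed = -(-active_jobs // jobs_per_node)  # ceiling division
    return max(0, total_nodes - needed)
```

A real scheduler would also weigh migration cost and reactivation time, as the article notes for virtual machines, but the greedy core is the same: every placement decision consults an energy (or heat) objective, not just CPU availability.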
These energy-saving measures, varied as they were, proved highly effective: data centers adopting them reduced their overall energy consumption by up to 60 percent, with an associated 50 percent reduction in CO2 emissions.
In the future, Unsal, Palomar, and their colleagues plan to investigate applying these measures to high-performance computing, in an effort to break through the so-called ‘power wall’ and aid efforts to reach exascale supercomputing at a reasonable energy cost. “We have reached a point at which further improvements in computing performance can only be achieved by improving the energy-efficiency of processing chips,” explains Unsal.
This article was originally published on ScienceNode.org. Read the original article.