Sensing the Future of Greener Data Centers
High-performance computing facilities turn to environmental sensors to improve energy efficiency
A 2007 Congressional report estimated that, in 2006 alone, data centers consumed a combined 61 billion kilowatt-hours (kWh) of electricity. That’s 1.5 percent of total U.S. electricity consumption, for a tab of $4.5 billion and growing. The problem is worrying enough that the U.S. Department of Energy recently awarded $47 million in grants for data center efficiency research.
Cost and carbon footprint are critical concerns, but they aren’t the only issues at play, says Kathy Yelick, associate laboratory director for the computing sciences directorate at Lawrence Berkeley National Laboratory (Berkeley Lab). Performance is suffering, too. “The heat, even at the chip level, is limiting processor performance,” says Yelick. “So, even from the innards of a computer system, we are worried about energy efficiency and getting the most computing with the least energy.”
Since that report, vendors and data centers have worked to improve energy efficiency by running data centers hotter, changing their layouts, and developing lower-power computers, more efficient power supplies and better cooling systems. “What we haven’t done very well is to use computers to do better at sensing, monitoring and controlling the data center environment,” says Jonathan G. Koomey, author of the Congressional report and a consulting professor at Stanford University.
High-performance computing researchers are tackling those issues at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) located at Berkeley Lab. NERSC is piloting a system of wireless sensors and software from SynapSense of Folsom, CA. The center is using this system to help staff make efficiency improvements and alert operators to environmental issues before they become critical.
At the Department of Energy’s National Energy Research Scientific Computing Center (NERSC), researchers use sensors to monitor the environmental conditions in the data center in an effort to reduce energy use. Courtesy of NERSC
In the NERSC data center, 785 sense points record air temperature, pressure and humidity every five minutes. Attached inside, above and sometimes below racks, these sensors feed data to some 100 relay stations, which, in turn, radio the data to the operations center. This data is collected by NERSC’s SynapSense system software, which generates heat, pressure and humidity maps.
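To give a sense of what such software does with raw readings, here is a minimal sketch of binning sensor readings into a coarse floor grid and flagging hot cells before rendering a map. Every coordinate, temperature, cell size and threshold below is invented for illustration; this is not the SynapSense implementation:

```python
from collections import defaultdict

# Fabricated readings: (x_ft, y_ft, temp_f) positions on the machine-room floor.
readings = [(3, 2, 68.5), (4, 2, 69.1), (12, 8, 84.0), (13, 8, 85.2), (7, 5, 71.3)]

CELL_FT = 5        # grid cell size in feet (assumed)
HOT_SPOT_F = 80.0  # hot-spot threshold in degrees Fahrenheit (assumed)

# Average all readings that fall into the same grid cell.
cells = defaultdict(list)
for x, y, temp in readings:
    cells[(x // CELL_FT, y // CELL_FT)].append(temp)

for cell, temps in sorted(cells.items()):
    avg = sum(temps) / len(temps)
    note = "  <-- hot spot" if avg > HOT_SPOT_F else ""
    print(f"cell {cell}: {avg:.1f} F{note}")
```

In practice, software like NERSC’s renders grids of this kind as the color-coded heat, pressure and humidity maps the article describes.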
“Just a glance at one of these maps tells you a lot,” says Jim Craw, group lead for computational systems at NERSC. “If you start developing hot or cold spots, you’re getting an unambiguous visual cue that you need to investigate and/or take corrective action.”
Once all 1,029 sense points are deployed, the system also will be able to calculate the center’s power usage effectiveness, an important metric that compares the total energy delivered to a data center with that consumed by its computational equipment.
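The metric itself is a simple ratio: total facility energy divided by IT equipment energy, so a value of 1.0 would mean every kilowatt-hour goes to computation. A quick sketch with made-up numbers:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total energy delivered to the data center
    divided by the energy consumed by its computational equipment."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly figures, not NERSC measurements: a PUE of 1.5 means
# half a kilowatt-hour of overhead (cooling, power conversion, lighting)
# for every kilowatt-hour that reaches the computers.
print(pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000))  # 1.5
```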
Although NERSC’s environmental sensing and monitoring system is still in testing, the center already has seen some results.
Making use of monitoring
“After replacing some older machines, we used this monitoring system to make sure that the entire facility functioned well from a cooling standpoint,” says Yelick, who is also NERSC director. The SynapSense system alerted staff about cold air pockets, allowing them to adjust air handling units and chillers to compensate.
The monitoring system also revealed changes in data center airflow following a power upgrade. Changing some floor tiles and partitions corrected the airflow and rebalanced air temperatures.
The NERSC system generates heat maps that can be used to look for areas where efficiency can be improved. Courtesy of NERSC
As Yelick notes, data centers could just keep all areas colder than they need to be, but for a center that strives to make the most efficient use of all its resources, that kind of thinking is wasteful. “Another option is to run a little closer to the maximum temperature the machines can safely tolerate,” says Yelick. Running at those higher temperatures requires more careful monitoring of airflow and temperature.
For example, a NERSC team recently installed an IBM iDataPlex system in a manner so efficient that, in some cases, the cluster can cool the air around it. By setting row pitch at five feet, rather than the standard six feet, and reusing the water exiting a Cray XT4, the team reduced cooling costs by half and used a third less floor space than an air-cooled installation. Sense points inside and outside the water-cooled doors have helped the team monitor and tune the system to run at high efficiency: about 105 degrees Fahrenheit inside the cabinets, 72 degrees outside.
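Running near a ceiling like that depends on alerting operators when readings drift too close to it. Below is a minimal sketch of that kind of threshold check; the safe maximum and warning margin are assumed values, and NERSC’s actual thresholds and alerting logic are not detailed here:

```python
from typing import Optional

SAFE_MAX_F = 105.0   # hypothetical maximum safe temperature
WARN_MARGIN_F = 5.0  # hypothetical headroom before operators are alerted

def check_reading(sensor_id: str, temp_f: float) -> Optional[str]:
    """Return an alert message when a reading nears or exceeds the ceiling."""
    if temp_f >= SAFE_MAX_F:
        return f"CRITICAL: {sensor_id} at {temp_f:.1f} F exceeds safe maximum"
    if temp_f >= SAFE_MAX_F - WARN_MARGIN_F:
        return f"WARNING: {sensor_id} at {temp_f:.1f} F nearing safe maximum"
    return None

# Fabricated readings from three hypothetical sense points.
for sensor, temp in [("rack12-top", 96.0), ("rack12-mid", 101.5), ("rack07-top", 106.2)]:
    alert = check_reading(sensor, temp)
    if alert:
        print(alert)
```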
Because the system is configured to simulate a wide range of data center environments, “we can make changes to air and water temperatures and know within an hour the total effects to our center and whether the change provided a payoff,” says Brent Draney, group lead for NERSC’s networking, security and servers group. “In the future, we hope to determine the optimal system configuration to minimize total energy usage on a per-job basis.”
It’s no surprise that Yelick calls the monitoring system a good investment.
Googling efficiency
Of course, NERSC is not alone. Google, for instance, also looks for ways to cut the costs of running its data centers. “Reducing the environmental footprint of our data centers starts with reducing their electricity consumption,” states a Google Web page. The same page points out that the electricity used in a data center eventually turns into heat. In fact, the company’s literature notes: “In many data centers, cooling alone is responsible for a 30 to 70 percent overhead in energy usage.”
According to Google spokesperson Emily Wood, “We have extensive monitoring and data collection for temperature, power and other parameters in our facilities. We use the information to reduce energy use and improve availability, planning and design.” She continues: “The optimizations we do with the data we measure allow us to safely reduce thermal margin and run our facilities at warmer temperatures. This gives us both a reduction in energy use, as well as a more consistent machine operating environment.”
Although Wood would not provide additional details, Google’s online information notes that the company uses evaporation as one technique for cutting cooling costs at its data centers. To do this, Google builds cooling towers at its data centers. These towers take warm water from a data center, cool it through evaporation, and then return cooler water to the data center to keep the computers running at the desired temperature.
Tomorrow’s efficiency tasks
Sensing and monitoring offer data centers lots of raw data, says Koomey. The next steps are to integrate that information, to make sense of it, and to recommend actions (or even to automate changes) that will save energy, he points out.
In addition to the SynapSense sensors, NERSC’s two large Cray systems have their own internal sensors, and sensor readings from two different chiller plants and air handlers come in on their own separate systems as well. NERSC staff wrote an application to bring the Cray and SynapSense data together, and they are working on integrating the others.
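The core of an integration job like that is normalizing each source’s records into a common schema before analysis. Here is a rough sketch of the idea; every field name, unit and format below is invented for illustration, not taken from the actual Cray or SynapSense interfaces:

```python
from datetime import datetime, timezone

# Invented record formats standing in for the separate data sources.
cray_sample = {"node": "c0-0c1s2", "temp_c": 41.2, "ts": 1262304000}  # epoch seconds
synapsense_sample = {"point": "row3-rack7-top", "temp_f": 72.5,
                     "time": "2010-01-01T00:05:00+00:00"}

def from_cray(rec):
    """Normalize a (hypothetical) Cray record: Celsius and epoch time."""
    return {"source": "cray", "sensor": rec["node"],
            "temp_f": rec["temp_c"] * 9 / 5 + 32,
            "when": datetime.fromtimestamp(rec["ts"], tz=timezone.utc)}

def from_synapsense(rec):
    """Normalize a (hypothetical) SynapSense record: Fahrenheit and ISO time."""
    return {"source": "synapsense", "sensor": rec["point"],
            "temp_f": rec["temp_f"],
            "when": datetime.fromisoformat(rec["time"])}

# Merge the normalized records into one time-ordered stream for analysis.
merged = sorted([from_cray(cray_sample), from_synapsense(synapsense_sample)],
                key=lambda r: r["when"])
for r in merged:
    print(r["source"], r["sensor"], f"{r['temp_f']:.1f} F", r["when"].isoformat())
```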
As tomorrow’s systems grow far more powerful, researchers will need to be even more vigilant about energy use, looking for every opportunity to reduce the required load. In addition to optimizing the facility’s efficiency, “we are looking for ways of redesigning systems from the ground up to improve efficiency too,” says Yelick. To that end, she predicts, “eventually, all of the major, large facilities will be doing the kind of energy efficiency monitoring that we are doing here.”
Mike May is a freelance science and technology writer based in Houston, TX. He may be reached at [email protected]