Built using commodity components, NGF can be easily upgraded as needed
The U.S. Department of Energy’s National Energy Research Scientific Computing Center has a straightforward approach to data: When any of the center’s 4,500 users need access to their data, NERSC needs to be able to deliver. It’s an approach that’s worked well for 39 years and helps NERSC’s users annually publish more than 1,500 scientific papers.
While the idea sounds simple, the technologies that make it possible involve complex systems and years of staff expertise in seamlessly stitching together myriad computing resources. To remain ahead of the data-growth curve, NERSC staff can quickly and easily scale up systems to handle the influx of data. Not only is the data being generated through simulations run on NERSC’s Hopper and Edison supercomputers and various clusters, but the center also has become a net importer of data from large-scale experiments and other HPC centers.
The success of NERSC’s systematic approach was clearly illustrated by the discovery of the last neutrino mixing angle — one of Science magazine’s top 10 breakthroughs of the year 2012 — which was announced in March 2012, just a few months after the Daya Bay Neutrino Experiment’s first detectors went online in southern China. Collaborating scientists from China, the United States, the Czech Republic and Russia were thrilled that their experiment was producing more data than expected, and that a positive result was available so quickly. (See “Global Filesystem Delivers High-bandwidth, High-volume, Low-response-time” for a detailed description.)
But that result might not have been available so quickly without the NERSC Global Filesystem (NGF), which is designed to make data more accessible to the center’s users regardless of which computing system they are using. Built using commodity components, NGF can be easily upgraded as needed. In contrast, other centers often use custom systems that may need to be replaced entirely every five years to keep up with demand. The NGF infrastructure allows staff at NERSC to rapidly scale up disk and node resources to accommodate large influxes of data.
NGF, which NERSC started developing in the early 2000s and deployed in 2006, was designed to make computational science more productive, especially for data-intensive projects like Daya Bay. NGF provides shared access to large-capacity data storage for researchers working on the same project, and it enables access to this data from any NERSC computing system. The end result is that scientists don’t waste time moving large data sets back and forth from one system to another — which used to be the case when each computer had its own file system. NERSC was one of the first supercomputer centers to provide a center-wide file system.
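The workflow difference is easy to sketch. The short example below uses hypothetical paths and a made-up open_dataset helper to contrast the two models: with a center-wide file system, an analysis job on any NERSC system reads the shared project directory in place; without one, the data first has to be staged onto that system’s own scratch space.

```python
from pathlib import Path
import shutil

# Hypothetical mount points, for illustration only
PROJECT_DIR = Path("/global/project/daya_bay")    # shared, visible from every system
LOCAL_SCRATCH = Path("/scratch/myuser/daya_bay")  # private to one compute system

def open_dataset(name: str, global_fs: bool = True) -> Path:
    """Return a readable path to a dataset file.

    With a center-wide file system, the shared path works as-is on any system.
    Without one, the file must first be copied into local scratch space.
    """
    src = PROJECT_DIR / name
    if global_fs:
        return src                                # read in place, no transfer
    dst = LOCAL_SCRATCH / name
    if not dst.exists():
        LOCAL_SCRATCH.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)                    # stage a copy before the job runs
    return dst
```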
According to Jason Hick, leader of NERSC’s Storage Systems Group, the Daya Bay team moved from custom storage to NGF both to handle the unexpected surge of experimental data and to take advantage of NGF’s center-wide access, which let the team bring additional NERSC computing resources to bear on analyzing its data.
“NGF provides storage to the users wherever they want to compute, analyze or visualize their data,” Hick said. “We’ve designed the file system from soup to nuts and are now maintaining it for them.”
That maintenance includes regular, incremental improvement of the system to add both capacity and capability. “The idea is to continually evolve the storage systems we have, so we don’t present users with entirely new systems. Instead, we provide constant stewardship and expansion of the system,” Hick said.
Archival storage at 40 PB and growing
As the primary scientific computing facility for DOE’s Office of Science, NERSC supports about 700 projects for its users. To support them, NERSC maintains a High Performance Storage System to provide both archival and backup data support. As of January 2013, the system held more than 140 million files containing 42 petabytes of data. Another petabyte is added every month.
“For the last four years, NERSC has been a net importer of data,” says NERSC Division Director Sudip Dosanjh. “About a petabyte of data is typically transferred to NERSC every month for storage, analysis and sharing, and monthly I/O for the entire center is in the two- to three-petabyte range. In the future, we hope to acquire more resources and personnel so that science teams with the nation’s largest data-intensive challenges can rely on NERSC to the same degree they already do for modeling and simulation, and the entire scientific community can easily access, search and collaborate on the data stored at NERSC.”
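For a rough sense of scale, the monthly figures Dosanjh cites can be turned into an average sustained data rate. The sketch below is back-of-the-envelope arithmetic only, assuming decimal petabytes and a 30-day month:

```python
# Convert the center's reported 2-3 PB of monthly I/O into an average data rate.
# Assumptions (illustrative, not NERSC's): decimal petabytes and a 30-day month.
PB = 1e15                          # bytes per decimal petabyte
SECONDS_PER_MONTH = 30 * 24 * 3600

for monthly_pb in (2, 3):
    rate_gb_s = monthly_pb * PB / SECONDS_PER_MONTH / 1e9
    print(f"{monthly_pb} PB/month is roughly {rate_gb_s:.1f} GB/s sustained")
# Prints roughly 0.8 and 1.2 GB/s, averaged around the clock.
```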
In addition to the data from Daya Bay, NERSC stores data from some of the largest experimental devices in the world, including the Large Hadron Collider in Europe, the Planck satellite mission, and DOE’s Joint Genome Institute. NERSC’s archive also stores data from climate modeling, protein folding and a diverse set of energy and science simulations performed at NERSC.
“We provide some of the largest open computing and storage systems available to the global scientific community,” said Hick. “At any given moment, there are about 35 people logged into the archive system, including users from as far away as Europe or Asia, to researchers at various universities across the United States. Our active archive system allows us to support the high read rates that our users demand while retaining data efficiently, reliably and cost effectively.”
But there are a lot of pieces involved. When NERSC replaced its existing tape infrastructure with newer-generation tape in its active archive in 2010, the staff migrated 40,489 tape cartridges ranging in age from two to 12 years. During this massive migration, NERSC tracked tape data reliability within its active archive and found that 99.9991 percent of tapes were 100 percent readable, an error rate of just 0.0009 percent.
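As a quick sanity check on that figure (the readable percentage is the one quoted above; the arithmetic is only illustrative):

```python
# The unreadable fraction is simply the complement of the readable fraction.
readable_pct = 99.9991            # percent of migrated tapes that were 100 percent readable
error_pct = 100 - readable_pct    # about 0.0009 percent
print(f"error rate: {error_pct:.4f} percent")
```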
That reliability backs up Hick’s belief that, when data is stored in an archive, it should be stored indefinitely. At NERSC, the oldest file dates to February 1979, though Hick notes it could be even older. And, once or twice a year, a former NERSC user will contact the center staff to see if they can find old data files. “We’ll search for it to help them find it, and often help them figure out how to read it if it’s in a really old format,” Hick said.
Jon Bashor is a communications manager at Lawrence Berkeley National Laboratory. He may be reached at editor@ScientificComputing.com.