The San Diego Supercomputer Center (SDSC) at the University of California San Diego and the Simons Foundation’s Flatiron Institute in New York have reached an agreement under which the majority of SDSC’s data-intensive Gordon supercomputer will be used by Simons for ongoing research following completion of the system’s tenure as a National Science Foundation (NSF) resource on March 31.
Under the agreement, SDSC will provide high-performance computing (HPC) resources and services on Gordon for the Flatiron Institute to conduct computationally based research in astrophysics, biology, condensed matter physics, materials science, and other domains. The two-year agreement, with an option to renew for a third year, takes effect April 1, 2017.
Under the agreement, the Flatiron Institute will have annual access to at least 90 percent of Gordon’s system capacity. SDSC will retain the rest for use by other organizations including UC San Diego’s Center for Astrophysics & Space Sciences (CASS), as well as SDSC’s OpenTopography project and various projects within the Center for Applied Internet Data Analysis (CAIDA), which is based at SDSC.
“We are delighted that the Simons Foundation has given Gordon a new lease on life after five years of service as a highly sought-after XSEDE resource,” said SDSC Director Michael Norman, who also served as the principal investigator for Gordon. “We welcome the Foundation as a new partner and consider this to be a solid testimony to Gordon’s data-intensive capabilities and its myriad contributions to advancing scientific discovery.”
“We are excited to have a big boost to the processing capacity for our researchers and to work with the strong team from San Diego,” said Ian Fisk, co-director of the Scientific Computing Core (SCC), which is part of the Flatiron Institute.
David Spergel, director of the Flatiron Institute’s Center for Computational Astrophysics (CCA), said, “CCA researchers will use Gordon both for simulating the evolution and growth of galaxies and for analyzing large astronomical data sets. Gordon offers us a powerful platform for attacking these challenging computational problems.”
Simons Array and Simons Observatory
The POLARBEAR project and its successor, The Simons Array, led by UC Berkeley and funded first by the Simons Foundation and then, starting in 2015, by the NSF under a five-year, $5 million grant, will continue to use Gordon as a key resource.
“POLARBEAR and The Simons Array, which will deploy the most powerful CMB (Cosmic Microwave Background) radiation telescope and detector system ever made, are two NSF-supported astronomical telescopes that observe the CMB, in essence the leftover ‘heat’ from the Big Bang in the form of microwave radiation,” said Brian Keating, a professor of physics at UC San Diego’s Center for Astrophysics & Space Sciences and a co-PI for the POLARBEAR/Simons Array project.
“The POLARBEAR experiment alone collects nearly one gigabyte of data every day that must be analyzed in real time,” added Keating. “This is an intensive process that requires dozens of sophisticated tests to assure the quality of the data. Only by leveraging resources such as Gordon are we able to continue our legacy of success.”
Gordon also will be used in conjunction with the Simons Observatory, a five-year, $40 million project awarded by the Foundation in May 2016 to a consortium of universities led by UC San Diego, UC Berkeley, Princeton University, and the University of Pennsylvania. In the Simons Observatory, new telescopes will join the existing POLARBEAR/Simons Array and Atacama Cosmology Telescopes to produce an order of magnitude more data than the current POLARBEAR experiment. An all-hands meeting for the new project will take place at SDSC this summer.
Delivering the Data
The result of a five-year, $20 million NSF grant awarded in late 2009, Gordon entered production in early 2012 as one of the 50 fastest supercomputers in the world, and the first to use massive amounts of flash-based memory. That made it many times faster than conventional HPC systems, while providing enough bandwidth to help researchers sift through tremendous amounts of data. Gordon also has been a key resource within NSF’s XSEDE (Extreme Science and Engineering Discovery Environment) project. The system will officially end its NSF duties on March 31, following two extensions from the agency.
By the end of February 2017, Gordon had supported research and education by more than 2,000 command-line users and over 7,000 gateway users, primarily through resource allocations from XSEDE. One of Gordon’s most data-intensive tasks was to rapidly process raw data from almost one billion particle collisions as part of a project to help define the future research agenda for the Large Hadron Collider (LHC). Gordon provided auxiliary computing capacity by processing massive data sets generated by one of the LHC’s two large general-purpose particle detectors used to find the elusive Higgs particle. The around-the-clock data processing run on Gordon was completed in about four weeks’ time, making the data available for analysis several months ahead of schedule.