Supercomputers typically have a useful life of about five years: running 24/7, these high-performance systems slowly succumb to node burn-out as well as to steady advances in processing technology.
Not so with Trestles, which was acquired more than three years ago by the Arkansas High Performance Computing Center (AHPCC) at the University of Arkansas after entering service at the San Diego Supercomputer Center (SDSC) at UC San Diego in mid-2011 under a $2.8 million National Science Foundation (NSF) grant.
Billed as a “high-productivity workhorse,” Trestles was based on the concept that by tailoring a system for the majority of modest-scale jobs, rather than for the handful of researchers who run jobs at core counts in the thousands, users could achieve higher throughput and increased scientific productivity.
While at SDSC, Trestles users spanned a wide range of domain applications, including astronomy, biophysics, climate science, computational chemistry, materials science, and more. It was also recognized as a leading platform for science gateway applications; for example, the system served more than 650 users per month via the popular CIPRES phylogenetics portal alone.
“It’s terrific that University of Arkansas researchers have been able to use Trestles for several years beyond its decommissioning as a national NSF resource and to extend the scientific impact of NSF’s HPC investments,” said Richard Moore, the principal investigator for the Trestles award and SDSC’s now-retired deputy director.
Trestles continues to deliver on that strategy today, more than three years into its “next life” as a valuable research resource at the U of A. AHPCC’s latest estimates are that during that time, Trestles has provided more than 136 million CPU hours of service, with over 804,000 jobs run among almost 200 active users.
“Trestles came to us at a time where computational needs were peaking in the form of explosive growth and demand in the faculty researcher community,” said AHPCC Director of Strategic Initiatives & User Services Jeff Pummill, who is also a Trestles user in the area of multi-omics, primarily with the U of A’s Biological Sciences and Agricultural departments. “Queue wait times were getting unacceptably long and jobs were stacking up. So the arrival of 8000+ compute cores was a welcome sight for all of us.”
Pummill noted that, architecturally, Trestles has been ideal for work in bioinformatics and genomics, as software in those fields is typically Shared Memory Parallel (SMP), using multiple processors on the same computer, as opposed to Distributed Memory Parallel (DMP), which uses multiple processors on either the same or multiple computers. “Trestles’ nodes are configured with 32 compute cores and 64 gigs of memory, which is ideal for smaller bacterial genome work, but useful for many aspects of larger eukaryotic genome work,” he added.
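The SMP model Pummill describes, in which many cores share one node's memory, can be sketched in Python with the standard multiprocessing module. This is a generic illustration (a hypothetical GC-content kernel, not AHPCC's actual software stack): all workers run on a single machine, much as a genomics job would occupy one 32-core Trestles node.

```python
from multiprocessing import Pool, cpu_count

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence -- a typical
    embarrassingly parallel genomics kernel."""
    return (seq.count("G") + seq.count("C")) / len(seq)

if __name__ == "__main__":
    # In the SMP style, all worker processes live on the same node
    # and draw on its shared pool of cores and memory (e.g. a
    # Trestles node's 32 cores and 64 GB), rather than communicating
    # across machines as in the DMP style.
    sequences = ["ATGCGC", "ATATAT", "GGGCCC", "ATGGCC"]
    with Pool(processes=min(4, cpu_count())) as pool:
        results = pool.map(gc_content, sequences)
    print(results)
```

A DMP version of the same computation would instead partition the sequences across separate nodes and exchange results over an interconnect (e.g. via MPI), which is why per-node core and memory counts matter less there.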
Some examples of research projects using Trestles at the U of A include:
- Materials Engineering: A research team including Salvador Barraza-Lopez, associate professor of physics at the U of A, and Taneshwor Kaloni, a former post-doctoral researcher in Barraza-Lopez's lab, shed light on the behavior of an ultrathin material known as tin telluride (SnTe). The study detailing their findings was published in the journal Advanced Materials.
- Neurosciences: Vidit Agrawal, a graduate student in the U of A’s Physics Department, has been using Trestles to perform simulations of large neural networks and conduct statistical analysis of experimental results.
- Supply chain analysis: Agrawal has also used Trestles to investigate the structural fragility of supply networks and explore its relationship with a firm’s equity risk. “AHPCC has been of great help to me as it has cut down my overall computation time from months to days,” he said.
- Microbiome research: Jiangchao Zhao, an assistant professor in the U of A’s Department of Animal Science, used Trestles to identify gut microbiome signatures associated with longevity, which provide a promising modulation target for healthy aging.
Additional research projects can be found here.
Re-use, Not Recycling
While many supercomputers still end up on the scrap heap, the continued operation of Trestles beyond its expected lifespan is just one example of lasting computational power and productivity.
In early 2017, SDSC and the Simons Foundation’s Flatiron Institute in New York reached an agreement under which the majority of SDSC’s data-intensive Gordon supercomputer would be used by Simons for ongoing research following completion of the system’s tenure as an NSF resource on March 31 of that year, after five years of service. While Gordon is now primarily used by the Simons Foundation, the system remains housed in SDSC’s data center.
“It’s very gratifying to see SDSC’s HPC systems continue to serve a wide range of researchers following their NSF tenures,” said SDSC Director Michael Norman. “For us, it’s testimony to designing a robust architecture from the start, which contributes to their useful lives well beyond what’s typical for such systems.”
In early 2018, the NSF extended the use of SDSC’s current petascale system, Comet, for a sixth year of service, through March 2021. Comet is now one of the most widely used supercomputers in the NSF’s XSEDE program. Under a separate NSF award valued at about $900,000, SDSC recently doubled the number of graphics processing units (GPUs) on Comet in direct response to growing demand for GPU computing across a wide range of research domains.