In the Scalable Storage issue of HPC Source, an interactive publication devoted exclusively to coverage of high performance computing, we explore "Scalable Storage: Solutions for Big Data and HPC" and share expert viewpoints on topics ranging from future predictions for HPC storage, to a global filesystem that delivers high volume and bandwidth with extremely low response time and can be easily upgraded, to the advantages of making aggregate bandwidth available to parallel applications.
In “HPC Architectures Begin Long-Term Shift Away from Compute Centrism,” Steve Conway, Research Vice President, HPC at IDC, provides an update on the fastest-growing segment in IDC’s market forecast: storage.
Jon Bashor, communications manager at Lawrence Berkeley National Laboratory (Berkeley Lab), describes how, "At NERSC, Scalable Storage for Big Data Leads to Big Science Breakthroughs." Built using commodity components, NGF, the National Energy Research Scientific Computing Center's Global Filesystem, delivers high bandwidth, high volume, and low response time, and can be easily upgraded as needed.
As Rob Farber, an independent HPC expert, explains in "Big Data Requires Scalable Storage Bandwidth," the great strength of scalable systems is their ability to make aggregate bandwidth available so that parallel applications can achieve very high performance.
As always, we invite you to pass this information along to colleagues who may also find its contents valuable, and we welcome your suggestions for future issues.