Storage fastest-growing segment in IDC market forecast
The HPC market is entering a kind of perfect storm. For years, HPC architectures have tilted farther and farther away from optimal balance between processor speed, memory access and I/O speed. As successive generations of HPC systems have upped peak processor performance without corresponding advances in per-core memory capacity and speed, the systems have become increasingly compute centric, and the well-known “memory wall” has gotten worse.
Now comes the HPC Big Data era that will require superb memory and I/O capabilities, sometimes with far less need for computing prowess. Data-intensive workloads are emerging relatively rapidly. Partly as a result of this, storage is the fastest-growing segment in IDC’s five-year HPC market forecast. IDC predicts that the storage segment will grow at a robust 8.9 percent CAGR, from $3.7 billion in 2011 to $5.6 billion in 2016. That amounts to a 51 percent revenue jump in five years.
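To make the forecast arithmetic concrete, here is a short Python sketch that projects the 2011 storage figure forward at the forecast CAGR. The dollar amounts and growth rate are the IDC figures quoted above; the function and variable names are purely illustrative.

```python
# Back-of-the-envelope check of the IDC storage forecast figures cited above.
# Illustrative only; the dollar amounts and CAGR come from the text.

def compound_growth(start, rate, years):
    """Project a starting value forward at a constant annual growth rate."""
    return start * (1.0 + rate) ** years

start_2011 = 3.7   # HPC storage revenue, $B, 2011 (per IDC)
cagr = 0.089       # forecast CAGR
years = 5          # 2011 -> 2016

projected_2016 = compound_growth(start_2011, cagr, years)
jump = (projected_2016 / start_2011 - 1.0) * 100

print(f"Projected 2016 revenue: ${projected_2016:.1f}B")  # ~$5.7B, close to the $5.6B forecast
print(f"Five-year revenue increase: {jump:.0f}%")          # ~53%, in line with the ~51% jump cited
```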
Emerging data-intensive problems are exposing more limitations of established HPC architectural designs — not just in the memory wall itself, but also in the way existing, compute-centric architectures handle data movement throughout the system. It’s important to make advances here, or data movement for emerging high-performance data analysis problems could become frustratingly slow and expensive. As AMD Chief Product Architect John Gustafson has noted, it may take up to 100 picojoules of energy to move the results of a calculation that required only 1 picojoule to compute.
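To see why that gap matters at system scale, the following sketch turns the per-operation figures quoted above (roughly 1 picojoule to compute a result and up to 100 picojoules to move it) into rough power numbers for a petaflop-class workload. The workload size and the assumption that every result moves are illustrative, not measurements of any real machine.

```python
# Rough illustration of the compute-vs-data-movement energy gap described above.
# The per-operation figures are the approximate values quoted in the text;
# the petaflop workload and "every result moves" assumption are illustrative.

PJ = 1e-12  # joules per picojoule

energy_per_op = 1 * PJ      # ~1 pJ to perform a calculation
energy_per_move = 100 * PJ  # ~100 pJ to move its result across the system

ops_per_second = 1e15       # a petaflop-class workload, for scale

compute_watts = ops_per_second * energy_per_op
movement_watts = ops_per_second * energy_per_move   # if every result had to move

print(f"Compute power:          {compute_watts / 1e3:.0f} kW")   # ~1 kW for the arithmetic itself
print(f"Data-movement power:    {movement_watts / 1e3:.0f} kW")  # ~100 kW to ship the results around
print(f"Movement/compute ratio: {movement_watts / compute_watts:.0f}x")
```

Even if only a fraction of results had to travel that far, data movement would quickly rival the arithmetic itself in the energy budget.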
High-performance data analysis (HPDA) is the term IDC coined to describe the convergence of the established data-intensive HPC market and the high-end commercial analytics market that is starting to move up to HPC resources. Simulation-driven HPC is the longest-standing part of the HPDA market. Since the start of the supercomputer era in the 1960s, important HPC workloads, such as cryptography and weather and climate research, have been data intensive. Large, heterogeneous data volumes can also accumulate from the results of iterative modeling/simulation methods, such as parametric modeling for design engineering, stochastic modeling in the financial services industry, and ensemble modeling in weather and climate research.
The newer kid on the HPDA block is analytics, which comes in many flavors. Of course, the financial services industry has been running analytics on HPC systems at least since the late 1980s. But newer methods, from MapReduce/Hadoop to graph analytics, have greatly expanded the opportunities for HPC-based analytics. A good commercial example is PayPal, a multi-billion-dollar eBay company, which not long ago integrated HPC servers and storage into its datacenter workflow to perform sophisticated fraud detection on eBay and Skype transactions in real time. Real-time detection can catch fraud before it hits credit cards. IDC estimates that PayPal has saved more than $710 million. Another commercial adopter is GEICO, which is using HPC to perform weekly updates of insurance quotes for every eligible U.S. household and individual.
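For readers less familiar with the MapReduce pattern mentioned above, the sketch below shows its two phases in miniature: a map step flags individual records under a toy rule, and a reduce step aggregates the flags per account. The transactions, field names and threshold are invented for illustration and are not PayPal’s actual fraud-detection logic.

```python
# Minimal illustration of the map/reduce pattern referenced above.
# The transactions, field names and screening "rule" are invented for
# illustration; this is not PayPal's fraud-detection logic.
from collections import defaultdict

transactions = [
    {"account": "A", "amount": 25.00,   "country": "US"},
    {"account": "B", "amount": 9400.00, "country": "US"},
    {"account": "A", "amount": 8700.00, "country": "RO"},
    {"account": "C", "amount": 12.50,   "country": "US"},
]

def map_phase(txn):
    """Emit (account, 1) for transactions that trip a toy screening rule."""
    if txn["amount"] > 5000:           # arbitrary threshold, purely illustrative
        yield (txn["account"], 1)

def reduce_phase(pairs):
    """Sum the flags per account."""
    totals = defaultdict(int)
    for account, count in pairs:
        totals[account] += count
    return dict(totals)

mapped = (pair for txn in transactions for pair in map_phase(txn))
flagged = reduce_phase(mapped)
print(flagged)   # {'B': 1, 'A': 1}
```

In a Hadoop deployment, the same two phases would run in parallel across many nodes, with the framework handling the shuffle of intermediate pairs between them.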
The common denominator underlying simulation- and analytics-based HPDA workloads is a degree of algorithmic complexity that is atypical for transaction processing-based business computing. With the help of sophisticated algorithms, HPC resources are already enabling established HPC users, as well as commercial adopters such as PayPal, to move beyond “needle in a haystack” searches in order to discover high-value, dynamic patterns. IDC believes that HPC resources will be increasingly crucial for extending “Big Data” capabilities from searches for discrete, known items to the discovery of unknown, unexpected, high-value patterns and relationships.
IDC forecasts that revenue for HPC servers acquired primarily for HPDA use will grow robustly, increasing from $673 million in 2011 to about $1.2 billion in 2016. Revenue for the whole HPDA ecosystem, including servers, storage and interconnects, software, and services, should be roughly double the server figure alone.
IDC believes that the need to employ complex algorithms to extract value from data, in real time or near-real time, is bound to drive rapid growth in HPDA. The HPDA market has been growing robustly from a modest base, driven in part by:
• The data explosion arising from larger, more complex simulations and from larger, higher-resolution scientific instruments and sensor networks (e.g., the Large Hadron Collider, the Square Kilometer Array telescope)
• Data-intensive homeland security applications such as cyber security, insider threats, and anti-terrorism
• Time-critical fraud/error detection in large government programs (e.g., national health care systems), and in industrial and commercial organizations
• A growing array of data-intensive life sciences applications, including drug discovery and testing, research on outcomes-based medicine, personalized medicine, systems biology and more
• The need for a growing number of commercial companies to adopt HPC resources to address algorithmically complex problems in real time or near-real time.
Most HPDA work will happen on standards-based clusters located on premises or in the cloud. Even graph analytics applications are being run successfully in public cloud environments, as long as the solutions aren’t needed in real time. But standard clusters are not winning all procurements. For some of the most challenging, unpartitionable jobs, especially real-time graph analytics, buyers need a more powerful solution. IDC expects these workloads to be run increasingly on HPC systems with large, global memories and turbocharged I/O capabilities.
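A small sketch helps explain why graph analytics is so hard to partition: even a simple breadth-first traversal touches vertices in a data-dependent order that cannot be predicted in advance, so splitting the graph across cluster nodes turns many neighbor lookups into network traffic. The graph and labels below are invented for illustration.

```python
# A toy breadth-first traversal showing the irregular, data-dependent access
# pattern typical of graph analytics. On a partitioned cluster, many of these
# neighbor lookups would cross node boundaries; in a large global memory they
# are simply local reads. Graph and labels are invented for illustration.
from collections import deque

graph = {            # adjacency list
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d", "e"],
    "d": ["f"],
    "e": ["f"],
    "f": [],
}

def bfs(start):
    """Return vertices reachable from start, in breadth-first order."""
    seen, order = {start}, []
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:   # which vertices come next depends on the data itself
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

print(bfs("a"))   # ['a', 'b', 'c', 'd', 'e', 'f']
```

On a system with a large global memory, those lookups stay local, which is exactly the kind of workload the shared-memory and multithreaded systems discussed later in this article target.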
Economically important use cases are starting to become apparent (e.g., fraud detection, high-frequency trading, personalized medicine, customer acquisition/retention). In the formative HPDA market, it will take time to know which of these use cases will resolve into markets worth pursuing. Plenty of new opportunities already exist, however, for HPC vendors that are prepared to address the converging HPDA simulation-analytics market, including commercial firms adopting HPC for the first time.
Some points to consider:
• Storage pain points are escalating. The issues users most often complain to IDC about on the storage side are access density and metadata management. The ability to maintain satisfactory response times as many more users access storage infrastructures is a major challenge today, as is keeping track of all data locations as data volumes and storage resources skyrocket in size. It is not uncommon for HPC/HPDA organizations to have storage capacities in the 5 to 25 PB range that are doubling every two to three years. NCSA claims that its 300 PB “Blue Waters” storage environment is the largest in the world. But many small and medium-size organizations face similar storage issues. IDC recently completed and will soon publish a detailed HPDA storage forecast by application and industry segment, to complement our HPDA server forecast.
• HPC resources are front and center in many HPDA environments. Much of the history of data warehousing/data mining has been characterized by the three-tier client-server architecture, in which the analytical server worked on copied data off to the side, isolated from the live, real-time workflow. In the new HPDA era, the HPC server and storage resources are more often inserted directly into the workflow to enable analysis of live, streaming data in real or near-real time. Before deploying HPC, PayPal was performing fraud detection in batch mode — too slow to catch fraud before it hit consumers’ credit cards. HPC servers, storage and software made real-time detection possible.
• Useful tools are largely lacking for very large data sets. Tools such as Hadoop and MapReduce can effectively expedite searches through the large data sets that characterize some of the newer HPDA problems. Users tell IDC that these tools can be great for retrieving and moving through complex data, but they do not allow researchers to take the next step and pose intelligent questions. In addition, the going gets tough when data sets cross the 100 TB threshold. Sophisticated tools for data integration and analysis on this scale are largely lacking today. This creates opportunities for vendors to provide more powerful, scalable tools.
• Clouds can be useful for some HPDA problems. As implemented today, public clouds are useful primarily for embarrassingly parallel HPC jobs and are much less effective on jobs requiring major interprocessor communication via MPI or other protocols. Hence, highly parallel HPDA problems can be attractive candidates for public clouds, so long as users don’t mind the initial upload time and don’t need to move the data often. HPDA cloud use is expanding to include even less-partitionable problems such as graph analytics, as long as the problems do not have to be solved in real time. Commercial firms, including small startups, are at the forefront of a trend toward taking HPDA problems directly to public clouds and avoiding the capital expense of building on-premises datacenters (the sketch below illustrates the embarrassingly parallel pattern that ports to clouds most easily).
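In that sketch, independent parameter cases fan out across worker processes with no communication between them; the model function and parameter values are invented for illustration, and the same structure maps naturally onto separate cloud instances.

```python
# A minimal embarrassingly parallel job: independent cases, no communication
# between workers, so it maps naturally onto separate cloud instances.
# The model function and parameter values are invented for illustration.
from multiprocessing import Pool

def run_case(params):
    """Stand-in for one independent simulation or scoring run."""
    rate, years = params
    return (1.0 + rate) ** years        # trivial placeholder computation

cases = [(0.01 * i, 5) for i in range(1, 9)]   # eight independent parameter sets

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(run_case, cases)    # each case runs in isolation
    print(results)
```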
IDC research shows that the HPC community plans to deploy two main strategies for addressing the data movement challenges. The first is to reduce data movement at all levels, via in-memory processing, in-database processing and related approaches. The second approach is to accelerate data movement via more capable fabrics and interconnect networks. Think here not only of vendors such as Mellanox and Cisco, but also of initiatives by processor vendors to move up a level of integration in order to provide capable fabrics. AMD’s SeaMicro initiative comes to mind, along with Intel’s acquisitions of QLogic’s InfiniBand assets and Cray’s proprietary interconnect assets. Presumably, the benefits of these initiatives will include, but not be limited to, improved core-core communications. Users praise ScaleMP’s solution for retrofitting clusters to enable SMP-like capabilities at modest scale, which in some cases is all that’s needed. Those requiring more SMP muscle and global memory for HPDA workloads can turn to SGI’s UV series, and the massive multithreading capability of YarcData’s uRIKA appliance also has been winning customers with extreme graph analytics challenges.
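As a minimal illustration of the first strategy, reducing movement rather than accelerating it, the sketch below computes an aggregate in a single in-memory pass over a data stream instead of staging intermediate copies. The record generator and the aggregate chosen are invented for illustration.

```python
# Illustration of reducing data movement by aggregating in memory, in one pass,
# rather than writing intermediate copies out and reading them back.
# The record generator and the aggregate are invented for illustration.

def records():
    """Stand-in for a stream of values arriving from storage or sensors."""
    for i in range(1_000_000):
        yield i % 97

def streaming_mean(stream):
    """One-pass running mean: nothing is copied or staged to disk."""
    total, count = 0, 0
    for value in stream:
        total += value
        count += 1
    return total / count if count else 0.0

print(f"mean = {streaming_mean(records()):.2f}")
```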
Storage and data movement can no longer remain secondary considerations. The goal of using available budgets to maximize peak and LINPACK flops (machoflops) will need to give way over time to a sharper focus on user requirements for sustained performance and time-to-solution. Already, one leading site, NCSA, has declined to submit high-performance LINPACK numbers for the Top500 rankings. The Top500 list will remain valuable for census-tracking large systems and trends affecting them over time, but the shift away from strong compute centrism will make it even more important to develop more balanced benchmarks for HPC buyers/users.
Steve Conway is Research VP, HPC at IDC. He may be reached at [email protected].