The Changing Face of HPC

By R&D Editors | September 30, 2005

Remarkable growth in line with Moore’s Law

High performance computing (HPC) has been around for as long as the computer itself. While HPC has long been a part of government agencies and research institutes such as NASA, the Department of Energy and the Department of Defense, only recently has it become a central element of commercial and industrial competitiveness. According to a 2005 report released by the Council on Competitiveness, a non-profit public policy organization of CEOs, university presidents and labor leaders, together with market research firm IDC, HPC has become a key component of U.S. business. A whopping 97 percent of the companies responding to the report’s survey say that HPC is integral to their ability to compete and succeed, and 47 percent say they could not survive without it.

There are three key factors that have ignited HPC’s market growth and evolving architecture:
• the rise of inexpensive cluster computing
• the development of parallel applications that harness a cluster’s immense power
• high performance storage that lets clusters become far more pervasive.

More than anything else, the nexus of these three developments is responsible for making HPC such an integral part of the way companies do business today.

Cluster computing to the rescue

Since its inception, HPC has been dominated by large machines built around very fast single CPUs, and over time those single CPUs have become even faster. Seymour Cray has been credited with taking the performance of the single CPU to tremendous heights, initially with his Cray line of machines. Later he developed symmetric multiprocessor (SMP) machines that used several CPUs but allowed every CPU to see all of the memory. This occurred in the early 1980s, just as the personal computing market was taking off with the IBM PC. The performance difference between the Cray CPUs and the Intel processors used in home PCs was enormous.

[Image: The Panasas ActiveScale Storage Cluster stores up to 5 TB per shelf and 55 TB per rack.]

In the mid- to late 1980s the workstation processor closed the performance gap with large HPC-oriented CPUs. At the same time, the performance of the PC CPU began a remarkable climb. The PC CPU has an advantage over the HPC CPU in that it is commodity-based: the cost of processor development is spread over tens of millions of processors rather than the few thousand sold into the HPC world. This rapid growth followed the pattern Intel co-founder Gordon Moore had described years earlier in his famous “Moore’s Law,” the prediction that the established trend of doubling the number of transistors per square inch would continue into the foreseeable future. The pace has slowed somewhat since, but transistor density has doubled approximately every 18 months, much as he originally observed.
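As a back-of-the-envelope illustration of what an 18-month doubling period implies (the starting transistor count and the time span below are made-up values for illustration, not figures from this article), the compounding arithmetic looks like this:

/* moore.c - compounding growth under an assumed 18-month doubling period */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double doubling_months   = 18.0;   /* assumed doubling period */
    const double start_transistors = 1.0e6;  /* hypothetical starting count */

    for (int years = 0; years <= 15; years += 3) {
        double doublings = (years * 12.0) / doubling_months;
        double count     = start_transistors * pow(2.0, doublings);
        printf("after %2d years: ~%.2e transistors (%.0fx)\n",
               years, count, count / start_transistors);
    }
    return 0;   /* 15 years at 18 months per doubling is 2^10, a 1,024-fold increase */
}

Fifteen years of such doubling yields roughly a thousand-fold increase, which is the arithmetic behind commodity processors closing the gap with custom HPC designs in little more than a decade.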

Moore’s Law also implied that PC processor speeds would quickly close the gap with HPC CPUs and, more importantly, that these commodity processors would cost several orders of magnitude less than HPC CPUs. Still, the performance of a single CPU, or of a small number of CPUs in an SMP configuration, remained a brick wall. Meanwhile, thanks in large part to ARPANET, the forerunner of the Internet, computers were being networked together by the mid-’80s, and as the commodity PC market grew, so did the commodity networking market, making networks both faster and cheaper to run.

In early 1993, then-NASA researchers Don Becker and Tom Sterling began building a small cluster called Beowulf that offered a price/performance advantage of several orders of magnitude over the large HPC machines of the day. The duo combined commodity Intel 486 DX4/100 processors, 10 megabit-per-second Ethernet and the Linux operating system to create the first commodity-based cluster. The intersection of these three elements, commodity processors, inexpensive high-performance networking and the open-source Linux operating system, set off the explosion of clusters and promised a new level of price/performance for HPC.

Today’s clusters contain thousands of commodity processors and dominate the Top500 list, which ranks the 500 fastest computer systems in the world. At the same time, people at companies and universities routinely build their own clusters to tackle complex problems that cannot be solved cost-effectively any other way.

Parallel applications to rule them all

Clusters provide the computing horsepower, but without applications they remain only a pile of blinking lights. The development of standard or quasi-standard message-passing libraries, such as the Message Passing Interface (MPI) and Parallel Virtual Machine (PVM), enables users to write applications that harness the distributed processing capability of a cluster for a single job. Complex problems can now be solved not only more easily, but faster as well.
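As a minimal sketch of this message-passing style, here is an MPI program (MPI is named in the article; the specific task, summing the integers 1 through N split evenly across processes, is simply an illustrative choice):

/* sum_mpi.c - each process sums its own slice of 1..N, then the slices are combined */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const long N = 100000000L;            /* total work: sum 1..N (illustrative size) */
    long chunk = N / nprocs;
    long lo = rank * chunk + 1;
    long hi = (rank == nprocs - 1) ? N : lo + chunk - 1;

    double local = 0.0;                   /* each CPU works on its slice independently... */
    for (long i = lo; i <= hi; i++)
        local += (double)i;

    double total = 0.0;                   /* ...and communicates only in the final reduction */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum(1..%ld) = %.0f\n", N, total);

    MPI_Finalize();
    return 0;
}

Built with an MPI wrapper compiler such as mpicc and launched with, say, mpirun -np 4 ./sum_mpi, the same program runs unchanged on a single workstation or across an entire cluster.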

With MPI and PVM libraries, users write applications that run a copy of the program on each CPU and share data among them. The secret is designing the application so that each CPU solves its part of the problem as independently of the other CPUs as possible. When each CPU can work on its piece with little need to communicate, the parallel run time can be estimated by dividing the single-CPU run time by the number of CPUs: if one CPU solves the problem in a certain amount of time, two CPUs should solve it twice as fast, four CPUs four times as fast, and so on. How the application’s actual performance tracks this ideal as CPUs are added is known as its scalability.
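The bookkeeping is simple enough to show directly. In the sketch below the wall-clock times are hypothetical numbers chosen for illustration; speedup is the single-CPU time divided by the N-CPU time as described above, and parallel efficiency (speedup divided by the CPU count, a standard companion metric not named in the article) shows how close each run comes to the ideal:

/* scaling.c - speedup and parallel efficiency from measured run times */
#include <stdio.h>

int main(void)
{
    /* hypothetical wall-clock times (seconds) for 1, 2, 4 and 8 CPUs */
    const int    cpus[]  = { 1, 2, 4, 8 };
    const double times[] = { 1000.0, 520.0, 270.0, 150.0 };
    const int n = sizeof(cpus) / sizeof(cpus[0]);

    for (int i = 0; i < n; i++) {
        double speedup    = times[0] / times[i];   /* T(1) / T(N) */
        double efficiency = speedup / cpus[i];     /* 1.0 would be perfect scaling */
        printf("%d CPUs: speedup %.2fx, efficiency %.0f%%\n",
               cpus[i], speedup, efficiency * 100.0);
    }
    return 0;
}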

Most of the early parallel applications were developed in universities, government labs and research institutions. These applications scaled well and showed others how to write good parallel code with PVM and MPI. Researchers and early cluster adopters in business then began rewriting their internal applications for clusters, using that early code as a template. What they discovered was that the increased performance of clusters let them spend more time on design and analysis and less time on routine processing tasks. More importantly, because clusters provided a huge computational resource at a comparatively low price, users became more productive and could afford to spend more time on innovation.

Shortly after, independent software vendors (ISVs) that develop and sell commercial software joined the cluster user community, realizing the cluster market offered them new opportunities and revenue streams. Today, ISV applications that reap the advantages of clustered computing are available in nearly every field, from oil and gas exploration to bioinformatics to digital rendering.

High performance storage

Clusters excel at applications that need very high computational power or large amounts of memory. However, many cluster applications also require substantial I/O throughput: they generate huge amounts of data while running and need to write it efficiently to a storage medium, usually disk drives.

The first clusters relied on the Network File System (NFS) for their I/O, but its performance was limited. As clusters grew more popular, so did the need for high performance storage that eliminated throughput bottlenecks. A number of companies now offer high-performance storage for clusters, though their approaches to delivering high I/O rates vary. Some, such as Panasas in Fremont, CA, combine low-cost commodity hardware, object-based storage technology and a parallel file system to reach the I/O throughput that HPC applications demand while keeping the storage system as reliable and manageable as possible. Coupling high performance clusters with high performance storage allows a wide range of computing tasks to be addressed cost-effectively, and several storage companies have since followed Panasas in taking parallel approaches to high I/O rates.
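The access pattern that such parallel file systems are designed to serve can be sketched with MPI-IO, the parallel I/O portion of the MPI standard. This is a generic illustration rather than Panasas-specific code, and the file name and buffer size are arbitrary:

/* parallel_write.c - every process writes its own slice of one shared file */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int COUNT = 1 << 20;                      /* 1M doubles per process (illustrative) */
    double *buf = malloc(COUNT * sizeof(double));
    for (int i = 0; i < COUNT; i++)
        buf[i] = rank;                              /* stand-in for real simulation output */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* each rank writes at its own byte offset in the shared file */
    MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(double);
    MPI_File_write_at(fh, offset, buf, COUNT, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}

Because every process writes at its own offset, a parallel file system can spread this traffic across many storage nodes instead of funneling it through a single NFS server.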

Summary — three rings, one tent

High performance computing has become exciting not merely because it is fascinating (and it is), but because it is now a tool that many companies use every day to develop new products, solve problems and innovate. HPC did not reach this stage through the old approach of ever larger, very expensive machines with a few CPUs. Instead, it has exploded because of the convergence of inexpensive cluster computing, parallel applications that harness those clusters, and high performance storage. True supercomputers are now being built by people at universities, companies, government labs and non-profit organizations. HPC has become pervasive in our economy and will continue to drive the competitive edge of businesses, as well as improve quality of life by solving problems relevant to today’s world.

Larry Jones is Vice President of Product Marketing at Panasas. He may be contacted at [email protected].
