Supercomputers are very large and powerful computers able to perform calculations much more rapidly than general-purpose computers. They achieve this with a massively parallel architecture. Today’s supercomputers use the same off-the-shelf processors, operating at the same clock speeds, as desktop computers, but they have a much larger number of them, so that many calculations can be performed simultaneously – in parallel. Not all calculations can be accelerated in this way, since some must be performed sequentially, with the output of the first calculation feeding into the second, and so forth.
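The distinction between independent and sequential calculations can be sketched in a few lines of Python. This is an illustrative toy, not real supercomputer code (which would distribute work across many nodes rather than threads): the sum of squares splits cleanly across workers, while the recurrence cannot be split at all.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def parallel_sum_of_squares(values, workers=4):
    # Independent calculations: every square can be computed at the
    # same time, so extra cores (or nodes) give a direct speed-up.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(square, values))

def sequential_recurrence(x0, steps):
    # Dependent calculations: each step needs the previous result,
    # so no number of extra cores can shorten this loop.
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1 - x)  # logistic map, purely illustrative
    return x
```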

Many complex simulations lend themselves well to massively parallel computing. Examples include climate modelling, quantum mechanics, nuclear fusion, astrophysics and molecular modelling. These types of simulations are often defined in terms of matrix algebra, which is naturally suited to parallel processing. Vectorized programs exploit this by explicitly performing calculations simultaneously on multiple values stored in vectors.
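To see why matrix algebra parallelizes so naturally, consider a plain-Python matrix multiply (a minimal sketch, not a production kernel): every entry of the result is an independent dot product, so in principle all of them could be computed at once on different cores.

```python
def matmul(A, B):
    # C[i][j] depends only on row i of A and column j of B.
    # No entry depends on any other entry, so all n*p dot products
    # are independent and could run simultaneously.
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]
```

Libraries such as NumPy and the BLAS routines underneath it apply exactly this independence, using vector instructions and multiple cores.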

As computing power increases and becomes more affordable, tasks once considered the domain of supercomputing become practical for desktop computers, while new, more demanding applications emerge for supercomputers. For example, in the 1980s, supercomputers were often used to render 3D images or perform stress analysis for aerospace structures. Today, desktop workstations include multi-core processors and graphics processing units (GPUs) with massively parallel architectures. These machines are capable of rendering 3D images in real time and performing structural calculations with millions of degrees of freedom in seconds. The domain of supercomputing has shifted to more complex simulations such as computational fluid dynamics (CFD), climate modelling, biology and simulation of the brain.

Historically, supercomputers required special operating systems to enable parallel processing within their massively parallel architectures – for example, the proprietary COS and UNICOS developed for early Cray supercomputers. Today, Linux is universally used on supercomputers. Because it is a free and open-source operating system, users can strip it down to the bare minimum needed to perform calculations, while researchers around the world can share their code. This has led to its use increasing over the last two decades, to the point that it now dominates supercomputing.

Supercomputers are typically optimized to perform floating-point arithmetic – mathematical operations on floating-point numbers. Their speed is, therefore, measured in floating-point operations per second (FLOPS) rather than the more common million instructions per second (MIPS). The largest supercomputers now have millions of processor cores and can perform hundreds of petaFLOPS – that is, on the order of 10^17 calculations per second.
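A rough sense of FLOPS can be had by timing a known quantity of floating-point work. The sketch below (an illustrative benchmark, not a standard one such as LINPACK) counts a naive n×n matrix multiply as roughly 2·n³ operations – one multiply and one add per inner-loop step – and divides by the elapsed time.

```python
import time

def estimate_flops(n=200):
    # A naive n x n matrix multiply performs about 2*n**3
    # floating-point operations (one multiply + one add per step).
    A = [[1.0] * n for _ in range(n)]
    B = [[1.0] * n for _ in range(n)]
    start = time.perf_counter()
    C = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    elapsed = time.perf_counter() - start
    return 2 * n**3 / elapsed
```

Pure Python will report only megaFLOPS; the gap between that figure and a supercomputer's hundreds of petaFLOPS is what optimized vector code and millions of cores buy.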
