The IEEE Rebooting Computing Initiative aims to be a catalyst for reinventing computing to continue dramatic enhancements in system capabilities. In the past, Moore's Law scaling of CMOS circuits delivered performance improvements sufficient to sustain the growth of the industry and benefit consumers. Looking to the future, hardware and software at all levels need to be reexamined, including consideration of novel materials, devices, architectures, and systems. One of the central pillars of any future computing technology is energy efficiency, which improves the tradeoff between power usage and performance. At the level of mobile and distributed devices, energy efficiency is needed to extend battery life. By contrast, the power level in data centers and supercomputers is much larger (MW instead of mW), but energy efficiency there is equally important. While a variety of alternative technologies are being explored, superconducting computing offers a very viable alternative for large-scale computing, despite the need for cryogenic cooling to temperatures near absolute zero.
Our modern computing technology consumes several percent of the world's total electrical power, and that percentage continues to rise. Most of this is associated with the massive computer server facilities that support the Internet and the Cloud. To put that into perspective, the collective carbon footprint of data centers is expected to exceed that of the air transportation system by about 2020. This trend cannot continue indefinitely into the future.
Superconductors are well known to conduct large electrical currents with zero resistance. This enables, for example, the large electromagnets that support magnetic resonance imaging (MRI) for medical applications and particle accelerators for physics research, while minimizing electrical power and electrical noise. But microelectronic applications of superconductors depend on a different phenomenon, using a superconducting device called the Josephson junction to shuttle small packets of magnetic flux around a circuit. Each single-flux-quantum (SFQ) packet is a fast voltage pulse with height ~1 mV and width ~2 ps, corresponding to a flux Φ₀ = h/2e ≈ 2 × 10⁻¹⁵ Wb = 2 mV·ps. These Josephson junctions act as switches and are the basic element of superconducting circuits in the same way that transistors are the basic element of silicon circuits.
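As a quick sanity check on these numbers, the flux quantum follows directly from the Planck constant and the elementary charge. A minimal Python sketch (the 1 mV pulse height and 2 ps width are the approximate values quoted above, not exact specifications):

```python
# Compute the magnetic flux quantum Phi_0 = h/2e and express it in mV*ps.
h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C

phi0 = h / (2 * e)   # flux quantum, Wb (1 Wb = 1 V*s)
print(f"Phi_0 = {phi0:.3e} Wb")                        # ~2.068e-15 Wb
print(f"Phi_0 = {phi0 / (1e-3 * 1e-12):.2f} mV*ps")    # ~2.07 mV*ps
# A ~1 mV pulse lasting ~2 ps therefore carries exactly one flux quantum.
```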
The key advantages of SFQ circuits for computing are:
- extremely low switching power, ~2 aJ/bit;
- extremely fast clock speeds, up to ~100 GHz or more;
- lossless signal propagation at the speed of light between circuit elements.
Taken together, these advantages promise orders of magnitude better performance than conventional silicon-based computers.
The obvious disadvantage is the required cooling. While there are superconducting materials that can operate at temperatures up to about 100 K (the so-called "high-temperature superconductors"), most superconducting integrated circuits are based on niobium, with a superconducting critical temperature of only 9 K and a typical operating temperature of about 4 K (−269 °C). The reason is that the manufacturing control and uniformity of niobium circuits are much better than those of other materials. Large-scale integrated circuits with up to 100,000 Josephson junctions have been fabricated on a single chip, with scaling to smaller devices in progress.
Laboratory experiments are often carried out in a bath of boiling liquid helium (T = 4 K), but practical systems require a mechanical refrigerator known as a cryocooler. A variety of reliable commercial 4 K cryocoolers have already been developed, for MRI systems in hospitals and for superconducting magnets in high-energy accelerators. There is a fundamental thermodynamic limit on the efficiency of these cryocoolers, known as the Carnot limit. The theoretical minimum power needed to transfer 1 W of heat from a cold temperature T_cold to a hot temperature T_hot is (T_hot − T_cold)/T_cold, or about 75 W for T_cold = 4 K and T_hot = 300 K. The most efficient commercial cryocoolers actually require 200–300 W of input power per W of cooling, leaving considerable room for improvement.
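The Carnot figure quoted above is easy to reproduce. A minimal sketch, using the temperatures given in the text:

```python
# Carnot limit on cryocooler specific power: the minimum input power (W)
# required to remove 1 W of heat at t_cold and reject it at t_hot.
def carnot_specific_power(t_cold, t_hot):
    return (t_hot - t_cold) / t_cold

print(carnot_specific_power(4.0, 300.0))    # 74.0 W per W of cooling (~75 W)
print(carnot_specific_power(77.0, 300.0))   # ~2.9 W per W at liquid-nitrogen temperature
```

The second line illustrates why higher operating temperatures are thermodynamically attractive, even though niobium circuits cannot exploit them.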
Fortunately, the power dissipated in a superconducting computer is so small that it remains 100 times more efficient than a comparable silicon computer, even after taking into account the inefficiency of present cryocoolers. Results from a recent feasibility study are shown in Figure 1, which plots power versus performance for existing silicon supercomputers alongside the projected performance of SFQ-based computers. The computer industry is moving toward exascale performance (1000 petaFLOPS), but extrapolations of current technology suggest roughly 200 MW would be needed to achieve it, which is unacceptably large. Electricity costs are such that 1 MW costs about $1M per year, and $200M/year just for the electric bill is not economically viable. The U.S. Department of Energy has identified an acceptable target power level of 20 MW, but that will be difficult to reach with modifications of conventional technology. In contrast, a superconducting supercomputer (comprising a large number of 50-100 GHz superconducting processors operating in parallel) projects to 2 MW including the cryocooler power, a factor of one hundred smaller than the projection for conventional technology, and well within the desired power budget.
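The "$1M per MW-year" rule of thumb is a round number; a back-of-envelope check, assuming an illustrative industrial electricity rate of about $0.11/kWh (the text does not state a rate, so this value is an assumption chosen for illustration):

```python
# Sanity-check the rule of thumb that 1 MW of continuous power
# costs about $1M per year in electricity.
hours_per_year = 24 * 365         # 8760 h
rate_usd_per_kwh = 0.114          # assumed industrial rate, illustrative only

cost_per_mw_year = 1000 * hours_per_year * rate_usd_per_kwh  # 1 MW = 1000 kW
print(f"${cost_per_mw_year:,.0f} per MW-year")  # close to $1M
print(f"${200 * cost_per_mw_year / 1e6:.0f}M/year for a 200 MW exascale machine")
```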
Unfortunately, complete superconducting processors of this type do not yet exist, and the present maturity level of superconducting technology is not quite up to the task. With this in mind, the U.S. Intelligence Advanced Research Projects Activity (IARPA) recently initiated a Cryogenic Computing Complexity (C3) program to promote the development of superconducting microprocessors and compatible cryogenic memories in the US electronics industry. Key contractors include IBM, Northrop Grumman, and Raytheon BBN, with other participants (universities and small companies) working as team members of the prime contractors. The goal is to develop the industrial infrastructure for superconducting processors and cryogenic memories in the next five years, so that commercial systems may be available in the next decade for integration into data centers and supercomputers.
There are other quite different models of superconducting computing. Neuromorphic computing, inspired by the structure and linkages of biological neurons, has recently received attention as an alternative to the traditional von Neumann computer architecture. Superconducting SFQ circuits, with their pulse-based logic, represent a natural implementation of neuromorphic circuits, and are more compact, much faster, and lower power than semiconductor implementations. Yet another computing approach is based on reducing power at the device level. SFQ circuits are very low in power, but they are still well above the thermodynamic limit of ~kT per switching step. However, novel variants of SFQ circuits have been developed that enable adiabatic and reversible computing, whereby the dissipation can be reduced to ~kT or even less. While this may reduce the clock speed somewhat, these reversible superconducting circuits may offer the ultimate minimum in power dissipation for those applications where such power reduction is critical.
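To put the thermodynamic limit in perspective, the ~2 aJ SFQ switching energy can be compared with kT·ln 2 at 4 K, the Landauer bound for erasing one bit. A minimal sketch (the 2 aJ figure is the typical value quoted earlier in this article):

```python
# Compare typical SFQ switching energy to the Landauer limit kT*ln(2) at 4 K.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 4.0              # niobium circuit operating temperature, K

landauer = k_B * T * math.log(2)   # minimum energy to erase one bit
sfq_energy = 2e-18                 # typical SFQ switching energy (~2 aJ)

print(f"Landauer limit at 4 K: {landauer:.2e} J")   # ~3.8e-23 J
print(f"SFQ / Landauer ratio: {sfq_energy / landauer:.2e}")  # several 10^4
```

The gap of roughly four to five orders of magnitude is the headroom that adiabatic and reversible SFQ variants aim to exploit.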
Finally, recent research has been exploring “quantum computing,” a truly radical departure from classical computing, based on the remarkable properties of quantum systems. (Note that SFQ circuits described earlier are not examples of quantum computing.) These properties, such as quantum superposition and quantum entanglement, are normally evident only on the atomic level, but may be found also in certain microcircuits cooled down to ultra-low temperatures of order 0.1 K or less. These systems may be modeled in terms of quantum bits or “qubits,” rather than classical bits. Superconducting circuits, with their extremely low power dissipation, have been identified as leading candidates for designing a quantum computer. While a general-purpose superconducting quantum computer is viewed as far off, a special purpose analog superconducting quantum computer is already being marketed commercially by D-Wave Systems of Canada. The D-Wave computer (cooled to 0.02 K using a helium dilution refrigerator) may enable rapid solution of certain difficult optimization and simulation problems.
In the past, the evolution of computer technology was largely driven by industrial advances in a single technology, namely very large scale CMOS silicon integrated circuits. That unified approach led to advances on all levels, from the smallest mobile phones to the largest supercomputers. But with the ending of Moore’s Law, this unified approach will inevitably split, leading to a variety of different device technologies, architectures and interface approaches specialized for several distinct applications. It is within such a mixed technology environment that we anticipate a major role for superconducting computing. Clearly, the cryogenic packaging makes it unlikely that superconducting circuits will be used for small mobile devices. On the other hand, large cryogenic systems are efficient and reliable as proven in other applications, such as high-energy-physics accelerators, and will be equally efficient and adaptable to large-scale supercomputers and data centers. The power advantage is indeed remarkably compelling. While such deployment requires widespread adoption of this technology in industry, we believe that the technology is ready and the economic benefit already proven. Research and development in the next decade will determine whether superconducting electronics or some other alternative will be a part of the family of new enabling technologies for Rebooting Computing later in the century.
Elie Track is CEO of nVizix, an IEEE Fellow, and co-Chair of the IEEE Rebooting Computing initiative. He also continues his lifelong work in superconducting electronics and its applications in communications and computation, focusing on high-efficiency implementation of HPC. Alan Kadin is a Technical Consultant based in Princeton Junction, NJ, and a participant in the IEEE Rebooting Computing Initiative. Earlier, he was with Hypres and served on the faculty of Electrical and Computer Engineering at the University of Rochester.