The history of computing is full of generational battles in which innovators and companies fought to shape the future. In the early era of computing, debates over the ideal architecture of systems with less power than most modern watches established the standards that exist today. From the late 1970s to the early 1990s, the emergence of the personal computing market saw early machines like the Radio Shack TRS-80 and Commodore 64 give way to more widespread PC platforms like the Apple Macintosh and Windows-powered PCs.
The end of Moore’s Law is ushering in another such upheaval, though this one is marked by renewed interest in entirely new approaches to computing. These new approaches include quantum computing and neural-inspired computing, both of which have risen significantly in popularity in recent years. Already, the first large-scale neural chips, such as IBM TrueNorth, are being produced with commercial intent, and early indications are that the technology can be entirely compatible with modern fabrication capabilities. Thus, the question is not whether industry can provide neural platforms on systems from mobile phones to supercomputers, but whether their promise will be realized.
While interest in the brain as a source of inspiration goes back to early computing pioneers, it is only now that the prospect of true brain-inspired computing capabilities is really on the horizon. Although approaches differ, neural-inspired computing ultimately comes down to looking to the brain for inspiration on several facets of computing: cognitive function (the ‘algorithm’), the brain’s circuitry (the ‘architecture’), and the unique properties of neurons and synapses (the ‘devices’).
Recent advances in each of these domains have made brain-inspired computing more attractive, particularly in light of the growing demand for computing to analyze and interpret vast amounts of data. First, the revolution in machine learning, particularly deep learning methods, has renewed excitement in neural network algorithms and other algorithms inspired by the brain. Second, the potential broad applicability of these machine learning applications has encouraged the development of new architectures that can accelerate these functions, particularly for smartphones and other embedded platforms. Finally, the increased need for low-power devices has encouraged looking to non-silicon-based modes of computation and memory storage, some of which have analog behaviors similar to those seen in neurons. While these developments provide an opportunity for the neural computing community, several challenges must be overcome if neural-inspired computing is to have a genuine impact.
The challenges facing neural computing
Despite the common perceptions that computers ‘think’ and brains are our bodies’ ‘computers,’ the gap between neuroscience and computer science is daunting. It goes without saying that the brain is a very different system from digital computers. At almost every scale one considers, brains and computers differ significantly. Brains are composed of neurons filled with salt water and organic compounds such as proteins and fatty acids, whereas computer hardware consists of conductive metals and semiconductors like silicon. Neurons communicate with sharp pulses of electrochemical activity known as ‘spikes,’ whereas computers represent information electrically in synchronized ‘1s’ and ‘0s.’ Neurons in the brain are organized in complex circuits that more closely resemble highly interconnected graphs, like airline routes or the internet, than any electronic circuit typically used in computing.
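To make the contrast concrete, the difference between spiking and digital representation can be caricatured in a few lines of code. The sketch below implements a minimal leaky integrate-and-fire neuron, a standard textbook abstraction: it integrates incoming current and emits a discrete pulse only when its internal potential crosses a threshold. All parameter values here are illustrative choices, not taken from any particular chip or biological measurement.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the textbook caricature
# of spiking. All constants are illustrative, not from any real system.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Integrate input over time; emit a spike (1) when the membrane
    potential crosses the threshold, then reset; otherwise emit 0."""
    v = 0.0                       # membrane potential
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input
        if v >= threshold:        # threshold crossing produces a spike
            spikes.append(1)
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady input produces spikes at a rate that encodes the input's
# strength, rather than a clocked binary value.
print(simulate_lif([0.3] * 20))
```

Unlike a clocked digital signal, the information here lives in the timing and rate of the pulses, which is precisely the property that makes spiking activity hard to map onto conventional architectures.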
For this reason, many of the underlying research questions around neural computing center on which aspects of biological neural computation should be emulated, since how to build neural architectures is increasingly well understood. This is in stark contrast to quantum computing, where the mathematics is well understood yet the implementation remains an engineering challenge. To a large extent, advancing neural-inspired computing depends on progress by the broader neuroscience community in studying what the brain’s computational mechanisms are.
The fact that we are still learning about the brain underlies the uncertainty about which aspects of it we want to incorporate into computing. Some scientists look to the brain for hints on how to achieve new cognitive capabilities in computers. Indeed, neuroscience offers potential solutions to many of the artificial intelligence challenges faced by those developing self-driving cars and smartphones. Alternatively, many engineers are attracted by the brain’s apparent low power consumption relative to conventional systems and take less inspiration from its algorithmic function. These differences of perspective have led to ongoing debates about whether the analog-level computation of individual neurons or the higher-level circuit connectivity should be central to a neural computer design.
Witnessing these debates, one might conclude that neural-inspired computing is still many years away from making a broad impact. This sentiment is likely incorrect, however, because the field has finally begun to forge ahead without waiting for a consensus on what ‘neural-inspired’ means. Neural chips focused on low power exist now, and computer scientists are developing algorithms without regard to who wins these debates. As a result, the field is diverse in its approaches and bursting with renewed enthusiasm. In effect, many different approaches take inspiration from the brain, and the likely result is a race to provide capabilities from which we can all benefit.
Will neural computing be limited only to machine learning?
The communities looking to the brain for algorithmic inspiration and for architectural inspiration have remained somewhat distinct. Nonetheless, there is a growing appreciation in the field that machine learning represents the domain where neural hardware should shine first. Data analytics algorithms such as deep learning are loosely neural-inspired, and thus they architecturally fit many of the neural hardware platforms being developed. Interestingly, it is in the domain of machine learning that neural computing technologies face their biggest competitor, and their biggest role model: the graphics processing unit (GPU).
For many years, sophisticated GPUs were developed primarily for graphics-intensive tasks, such as video games and graphic design. GPUs were thus considered a specialized platform, and the ability to program them was restricted to those applications. In recent years, however, the general-purpose abilities of GPUs have become increasingly appreciated, revealing that a small-scale supercomputer resides within many desktop machines. Companies such as NVIDIA and AMD have successfully pivoted their technology far beyond these limited markets, demonstrating the general applicability of GPUs to tasks such as machine learning.
Today, many computationally intensive tasks are performed with the aid of GPUs, particularly tasks highly dependent on linear algebra. It is not unreasonable to predict that neural-inspired computing technologies will follow a similar trajectory, with the first generation of users relying on neural chips to perform specialized tasks and subsequent generations recognizing that their utility is far broader than originally considered. Therefore, as with GPUs, it is worth considering whether neural hardware is destined to serve as simple ‘accelerator’ chips for a handful of important but limited applications, like machine learning, or whether it will be suitable for a broader set of applications. Some recent results suggest the latter. At the recent Neuro-Inspired Computing Elements (NICE) Workshop, held at IBM Almaden, researchers not only described the use of neural computers for machine learning applications like deep learning, but also highlighted potential roles in computing tasks as conventional as solving optimization problems and matrix multiplication.
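As a toy illustration of how such a conventional kernel might map onto spiking hardware, consider rate coding, one common encoding discussed in the literature: each input value becomes the firing probability of an input neuron, the synaptic weights perform the multiplication, and the answer is read out from the output neurons’ time-averaged activity. The sketch below is schematic; the function name and parameters are invented for illustration and describe no particular chip.

```python
import random

# Toy rate-coded matrix-vector multiply on a "spiking" substrate: inputs
# in [0, 1] become stochastic spike trains, synaptic weights scale the
# spikes, and output neurons accumulate the result. A schematic sketch
# of one possible encoding, not a description of any real hardware.

def spiking_matvec(weights, x, n_steps=10000, seed=0):
    rng = random.Random(seed)
    accum = [0.0] * len(weights)          # per-output accumulated input
    for _ in range(n_steps):
        # Each input neuron spikes with probability equal to its value.
        spikes = [1 if rng.random() < xj else 0 for xj in x]
        for i, row in enumerate(weights):
            accum[i] += sum(w * s for w, s in zip(row, spikes))
    # Time-averaging the accumulated activity estimates W @ x.
    return [a / n_steps for a in accum]

W = [[0.5, -0.2],
     [0.1,  0.9]]
x = [0.8, 0.4]
print(spiking_matvec(W, x))   # converges toward [0.32, 0.44]
```

The estimate sharpens as more time steps are allowed, trading latency for precision.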
Can neural computing replace Moore’s Law?
It is worth asking whether neural inspiration can lead computing in the post-Moore’s Law era. On this front, neural-inspired computing methods, and particularly those approaches that look most closely to the brain, have an ace up their sleeve compared to other methods. While Moore’s Law directly described the scaling of transistors, it also reflected a strong cross-industrial technological revolution in which smaller devices led to improved computing capabilities, which in turn led to better engineering capabilities that allowed, among many other things, improved devices. As long as power was not a limiting factor, this cyclical process was the engine that kept Moore’s Law going for over fifty years.
Among new approaches to computing, neural approaches are almost uniquely situated to have a similar type of positive feedback loop. Most researchers in neural-inspired computing would agree that the incorporation of neural principles into computing is in its early days. Not only is there much potential in the design of neural hardware platforms, but there is similarly much to be learned from the brain regarding algorithmic approaches. Thus neural computing has room to continue developing even if power considerations prohibit further miniaturization of silicon devices. The fuel for this innovation, our knowledge of the brain, is increasingly accessible as well. The neuroscience community’s understanding of the brain is itself at a critical point, with major initiatives such as the EU Human Brain Project and the US BRAIN Initiative rapidly incorporating the best technology available, including cutting-edge computing and data analytics methods, to bring about a deeper understanding of the brain.
Meanwhile, the broader science and engineering communities can simply prepare themselves for computers to look different in the future. Gone are the days when the only difference between a computer and its predecessor was more RAM and a faster CPU. For many reasons, the computing of the future, whether on GPUs or on a neural chip, will likely look more like Google’s TensorFlow than FORTRAN. However, it is far too early to know exactly who the winners and losers will be; we know only that, in neural-inspired computing, we are still in the infancy of the explosion to come.
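To give a flavor of that shift, compare the imperative loops of FORTRAN-era code with the declarative, dataflow style of TensorFlow, in which one describes operations over tensors and lets the runtime decide where they execute, whether on a CPU, a GPU, or, in principle, a future neural accelerator. The sketch below uses TensorFlow’s public API; the particular layer and values are purely illustrative.

```python
import tensorflow as tf

@tf.function                      # trace this Python function into a dataflow graph
def layer(x, w, b):
    # One neural-network layer: a matrix multiply plus bias, then ReLU.
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.constant([[1.0, 2.0]])
w = tf.constant([[0.5, -1.0],
                 [0.25, 0.75]])
b = tf.constant([0.1, 0.2])
print(layer(x, w, b))             # the runtime chooses how and where to run it
```

The design point is that the program describes what to compute, not how; it is exactly this separation that would let the same high-level code target a neural chip without being rewritten.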