Researchers have developed a computational model of a neural circuit in the brain that could help explain the biological role of inhibitory neurons, which are neurons that keep other neurons from firing.
The model, developed at the Massachusetts Institute of Technology’s (MIT) Computer Science and Artificial Intelligence Laboratory, describes a neural circuit consisting of an array of input neurons and an equivalent number of output neurons.
The neural circuit performs a “winner-take-all” operation where signals from multiple input neurons induce a signal in just one output neuron.
The model makes empirical predictions about the behavior of inhibitory neurons in the brain, offering an example of how computational analysis could aid neuroscience.
The researchers were able to show that a certain configuration of inhibitory neurons provides the most efficient means of enacting a winner-take-all operation.
Nancy Lynch, the NEC Professor of Software Science and Engineering at MIT and senior author of the paper, explained some of the science behind the research.
“There’s a close correspondence between the behavior of networks of computers or other devices like mobile phones and that of biological systems,” Lynch said in a statement. “We’re trying to find problems that can benefit from this distributed-computing perspective, focusing on algorithms for which we can prove mathematical properties.”
In recent years, scientists have relied on artificial neural networks, computer models loosely based on the structure of the brain, to drive rapid improvements in artificial intelligence systems, from speech transcription to face recognition software.
An artificial neural network consists of nodes that have limited information-processing power but are densely interconnected. If the data received by a given node meet its threshold criterion, the node fires, sending signals along all of its outgoing connections.
Each of the outgoing connections has an associated weight, which can augment or diminish a signal.
In the next layer of the network, each node receives weighted signals from multiple nodes in the first layer. It adds these signals together, and if their sum exceeds a certain threshold, it fires, passing signals along its own outgoing connections to the next layer, and so on.
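The layered weighted-sum-and-threshold pass described above can be sketched in a few lines. This is a minimal illustration, not the paper's model; the sizes, weight values, and threshold here are invented for the example.

```python
import numpy as np

def threshold_layer(inputs, weights, threshold):
    """Each node in the next layer sums its weighted incoming
    signals and fires (1) only if that sum exceeds the threshold."""
    weighted_sums = weights @ inputs        # one weighted sum per node
    return (weighted_sums > threshold).astype(int)

x = np.array([1.0, 0.0, 1.0])               # firing pattern of the first layer
W = np.array([[0.4, 0.9, 0.3],              # positive weights augment a signal,
              [-0.5, 0.2, 0.1]])            # negative weights diminish it
print(threshold_layer(x, W, threshold=0.5)) # [1 0]
```

Only the first node of the second layer fires here: its weighted sum (0.4 + 0.3 = 0.7) clears the threshold, while the second node's sum (-0.4) does not.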
In artificial-intelligence applications, a neural network is "trained" on sample data, constantly adjusting its weights and firing thresholds until the output of its final layer consistently represents the solution to a computational problem.
Lynch and the research team made several modifications to this design to make it more biologically plausible, including the addition of inhibitory neurons.
In a standard artificial neural network, the weights on the connections are either all positive or capable of being either positive or negative. But in the brain, some neurons appear to play a purely inhibitory role, preventing other neurons from firing.
The researchers modeled those neurons as nodes whose connections have only negative weights.
The network is also probabilistic: increasing the strength of the signal traveling over an input neuron only increases the chances that its corresponding output neuron will fire.
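The probabilistic firing rule can be sketched as below. The sigmoid firing function is an assumption made for illustration (the article does not specify the model's exact firing function); the point is only that a stronger weighted input raises the probability of firing without ever guaranteeing a spike.

```python
import numpy as np

def fire_probability(weighted_input):
    """Map a weighted input to a firing probability strictly
    between 0 and 1 (sigmoid chosen for illustration)."""
    return 1.0 / (1.0 + np.exp(-weighted_input))

rng = np.random.default_rng(0)

def stochastic_fire(weighted_input):
    """Fire (1) or stay silent (0) at random, with probability
    given by the strength of the weighted input."""
    return int(rng.random() < fire_probability(weighted_input))

# A stronger signal is more likely, but never certain, to fire:
print(round(fire_probability(0.5), 3), round(fire_probability(3.0), 3))  # 0.622 0.953
```

This captures the property stated above: strengthening an input signal shifts the odds in favor of the output firing rather than determining it outright.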
The researchers also showed that, in the context of the model, enacting the winner-take-all strategy is impossible with only one inhibitory neuron but possible with two.
One of the inhibitory neurons sends a strong inhibitory signal if more than one output neuron is firing, while the other sends a much weaker signal as long as any output neuron is firing.
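A deterministic toy version of this two-inhibitor mechanism can be simulated as follows. Every drive, weight, and threshold here is invented for illustration; the paper analyzes a probabilistic model and proves convergence bounds rather than running simulations like this one.

```python
STRONG = 0.5   # "stability" inhibitor: active only when >1 output fires
WEAK = 0.8     # "convergence" inhibitor: active when any output fires
SELF = 0.5     # self-excitation that lets the current winner keep firing

def wta_round(active, drives):
    """One synchronous round: each output re-decides whether to fire,
    comparing its drive (plus self-excitation if already firing)
    against the total inhibition currently being broadcast."""
    n_firing = sum(active)
    inhibition = (STRONG if n_firing > 1 else 0.0) \
               + (WEAK if n_firing >= 1 else 0.0)
    return [int(d + (SELF if a else 0.0) > inhibition)
            for d, a in zip(drives, active)]

drives = [0.6, 0.9, 0.7]   # distinct input strengths for 3 output neurons
active = [1, 1, 1]         # all outputs start out firing
for _ in range(10):
    active = wta_round(active, drives)
print(active)  # [0, 1, 0] -- only the most strongly driven output survives
```

When several outputs fire at once, both inhibitors are active and only the most strongly driven output clears the combined inhibition; once a single winner remains, the weaker "convergence" signal alone keeps the losers silent while the winner's self-excitation sustains it.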
The researchers also determined the minimum number of auxiliary neurons required to guarantee a given convergence speed, and the maximum convergence speed possible with a given number of auxiliary neurons.