The human cerebellum is a mysterious thing. Responsible for motor control, it’s the reason why we can walk, run, or learn to hit a baseball without having to consciously think through the mechanics of what we’re doing. These are some of the tasks that robots — with their ‘electronic’ brains — struggle with most.
Now a pair of researchers in Japan has used GPUs and the CUDA parallel programming model to create a 100,000-neuron simulation of the human cerebellum, one of the largest simulations of its kind in the world. And they’ve put their model to the test by using it to teach a robot to hit a ball.
Tadashi Yamazaki at the University of Electro-Communications in Tokyo, and Jun Igarashi at Okinawa Institute of Science and Technology Graduate University in Okinawa, recently published a paper detailing how they used NVIDIA GPUs to build a large-scale network model of the human cerebellum. They began this work while at the RIKEN Brain Science Institute near Tokyo, a top international center for advanced brain research.
The two believe that modeling the cerebellum could help robots move around more easily and learn to respond autonomously to their environments, a challenge that has proven daunting for conventional approaches. And in turn, they hope to shed more light on how cerebellar motor control works.
Their work is part of a subfield of robotics called “biomimetic” robotics that aims to help robots deal with ever-changing environments by mimicking some of the ways that humans solve these problems.
According to Igarashi, their work involved modeling realistic neural brain function to enable the robot to interact with its environment, which is no easy task.
“Our physical actions change the environment, which in turn changes the sensory input to the brain. The brain then processes this changed sensory information and determines what action to take. This is called the ‘sensorimotor loop,’” Igarashi explains. “The brain must continue to choose appropriate actions on the basis of gradually changing sensory information.”
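To make that loop concrete, here is a minimal sketch in CUDA-style C++. The types and function names (SensorState, selectAction, and so on) are hypothetical placeholders for illustration, not the researchers’ actual code:

```cuda
#include <cstdio>

// Hypothetical placeholder types; not the researchers' actual API.
struct SensorState { float ballDistance; };
struct Action      { float swingDelayMs; };

// Stubbed sensing and actuation so the sketch compiles and runs.
SensorState readSensors()         { return {1.0f}; }
void        applyAction(Action a) { std::printf("swing in %.0f ms\n", a.swingDelayMs); }

// Stand-in for the simulated cerebellum: map sensation to action.
Action selectAction(SensorState s) { return {s.ballDistance * 100.0f}; }

int main() {
    // Sense -> decide -> act. Acting changes the environment, which
    // changes the next sensor reading: the "sensorimotor loop".
    for (int step = 0; step < 3; ++step) {
        SensorState s = readSensors();
        Action a = selectAction(s);
        applyAction(a);
    }
}
```

Because each action reshapes the next sensation, the decide step has to keep pace with the real world, which is exactly where simulation speed becomes the bottleneck.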
One of the biggest challenges in modeling neural brain function: simulation speed. Using a CPU alone, it took 98 seconds of compute time to figure out how to respond to a stimulus lasting just one second. Using GPUs resulted in a 100x speedup, giving the GPU-based system the speed needed to handle real-world tasks.
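Why do GPUs help so much? Within a single time step, each neuron’s state can be updated independently of the others, which maps naturally onto thousands of GPU threads running at once. The sketch below is a generic leaky integrate-and-fire update with one CUDA thread per neuron; it illustrates the parallelism, not the actual model from Yamazaki and Igarashi’s paper, and the constants are invented:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Generic leaky integrate-and-fire update, one thread per neuron.
// An illustration of the parallelism only; not the paper's model.
__global__ void stepNeurons(float* v, const float* input, int n,
                            float leak, float threshold, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Each neuron's membrane potential evolves independently within a
    // time step, so all 100,000 updates can run in parallel on the GPU.
    v[i] += dt * (-leak * v[i] + input[i]);
    if (v[i] > threshold) v[i] = 0.0f;  // fire and reset (spike handling omitted)
}

int main() {
    const int n = 100000;  // roughly the scale of the paper's model
    float *v, *input;
    cudaMallocManaged(&v, n * sizeof(float));
    cudaMallocManaged(&input, n * sizeof(float));
    for (int i = 0; i < n; ++i) { v[i] = 0.0f; input[i] = 0.5f; }

    // 1,000 steps of 1 ms each: one simulated second of activity.
    for (int t = 0; t < 1000; ++t)
        stepNeurons<<<(n + 255) / 256, 256>>>(v, input, n, 0.1f, 1.0f, 0.001f);
    cudaDeviceSynchronize();

    std::printf("v[0] after 1 s: %f\n", v[0]);
    cudaFree(v);
    cudaFree(input);
}
```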
To show their system in action, the researchers demonstrated their robotic system learning — in real time — how to hit a small plastic ball thrown by a toy pitching machine with a round plastic racket.
The robot’s task is to learn the timing needed to hit a flying ball, mimicking the sort of visual thinking humans use to quickly learn how to navigate the real world. “When the ball speed is changed, the robot forgets the learned timing and relearns the new timing, rather than just repeating what it learned before,” Yamazaki says.
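The paper’s cerebellar learning rule isn’t reproduced here, but a toy error-driven update shows the behavior Yamazaki describes: the same rule that learns the timing also overwrites it when the ball speed changes. The learning rate and pitch times below are made up for illustration:

```cuda
#include <cstdio>

// Toy timing learner: nudge a swing-time estimate toward the observed
// arrival time on every pitch. When the ball speed changes, the same
// rule overwrites the old timing -- "forgetting" and relearning.
int main() {
    float learnedTiming = 0.0f;  // when to swing, in seconds
    const float rate = 0.5f;     // learning rate (invented)

    auto pitch = [&](float arrivalTime) {
        float error = arrivalTime - learnedTiming;
        learnedTiming += rate * error;  // error-driven update
        std::printf("swing at %.3f s (error %+.3f)\n", learnedTiming, error);
    };

    for (int i = 0; i < 5; ++i) pitch(0.80f);  // slow pitches: converge
    for (int i = 0; i < 5; ++i) pitch(0.50f);  // faster pitches: relearn
}
```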
To be sure, none of this will result in a robot that can walk into a batting cage and start hitting dongs anytime soon. It could be years before scientists agree on a standard model of how the cerebellum works, and putting the results into a working robot would require this work to be integrated into a larger system — no easy task.
Yet GPUs helped Yamazaki and his colleagues take a big leap in this direction by making it possible to run their model on a PC equipped with a single off-the-shelf NVIDIA GPU. None of this required any exotic or expensive hardware.
Up next for Yamazaki and Igarashi will be advancing their brain research even further, with the ultimate goal of expanding their cerebellum model until they have a complete understanding of how this area of the brain works.
Armed with this data, researchers can better understand human motor function and the interaction between the cerebellum and other parts of the brain, and potentially uncover the causes of motor neuron diseases.
And what’s next in the area of biomimetic robotics?
Yamazaki believes his work could, within five years, result in robots that rely on a silicon cerebellum to “think”: that is, to assess their environment and organize movements autonomously.
“GPUs would play an essential role… because in my opinion, GPUs are the supercomputer for the rest of us,” says Yamazaki.
Not bad.