A device called a vocoder, which harnesses speech synthesis and artificial intelligence, can monitor a person’s brain activity to reconstruct the words they hear in their minds.
Neuroengineers at Columbia University’s Zuckerman Institute have developed a system that translates thought into intelligible, recognizable speech. The discovery could yield new ways for computers to communicate directly with the brain and help people with diseases and disorders that affect speech, including ALS and the aftereffects of stroke.
“Our voices help connect us to our friends, family and the world around us, which is why losing the power of one’s voice due to injury or disease is so devastating,” Nima Mesgarani, PhD, the paper’s senior author and a principal investigator at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute, said in a statement. “With today’s study, we have a potential way to restore that power. We’ve shown that, with the right technology, these people’s thoughts could be decoded and understood by any listener.”
It has long been known that distinctive patterns of brain activity appear when a person speaks or imagines speaking, as well as when someone listens to, or imagines listening to, another person.
Efforts to harness these patterns to decode brain signals have proven challenging, with earlier attempts often relying on simple computer models that analyze spectrograms, visual representations of sound frequencies over time.
However, this approach produced nothing approaching intelligible speech, leading the Columbia team to use a computer algorithm that can synthesize speech after being trained on recordings of people talking.
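To illustrate the spectrogram representation that earlier approaches relied on, here is a minimal sketch in Python using SciPy. The 440 Hz test tone is a stand-in for recorded speech; this is not the study’s actual pipeline.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic 1-second audio signal: a 440 Hz tone (stand-in for recorded speech)
fs = 16000                      # sample rate in Hz
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 440 * t)

# Short-time Fourier analysis: sound energy at each frequency over time
freqs, times, Sxx = spectrogram(audio, fs=fs, nperseg=256)

# Sxx is a 2-D array: rows are frequency bins, columns are time frames.
# Plotting it as an image gives the familiar spectrogram picture.
print(Sxx.shape)
```

For the test tone, the frequency bin with the most energy sits near 440 Hz, which is exactly the kind of structure these models tried to read out of brain activity.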
“This is the same technology used by Amazon Echo and Apple Siri to give verbal responses to our questions,” said Mesgarani, who is also an associate professor of electrical engineering at Columbia’s Fu Foundation School of Engineering and Applied Science.
The team taught the vocoder to interpret brain activity by asking epilepsy patients who were already undergoing brain surgery to listen to sentences spoken by different people while the researchers measured their patterns of brain activity.
The researchers then asked the same patients to listen to speakers reciting digits from zero to nine, recorded the resulting brain signals, and fed those measurements through the vocoder. The sound the vocoder produced in response was then analyzed and cleaned up by neural networks.
The researchers ultimately produced a robotic-sounding voice that recited the sequence of numbers.
They tested the accuracy of the reconstruction by having volunteers listen to the recording and report what they heard.
“We found that people could understand and repeat the sounds about 75 percent of the time, which is well above and beyond any previous attempts,” Mesgarani said. “The sensitive vocoder and powerful neural networks represented the sounds the patients had originally listened to with surprising accuracy.”
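The 75 percent figure is an intelligibility score: the fraction of played digits that listeners correctly reported back. A minimal sketch of that scoring, with hypothetical listener responses, looks like this:

```python
# Hypothetical data: the digits actually played vs. what one listener reported
played   = [3, 7, 1, 0, 9, 4, 2, 8]
reported = [3, 7, 1, 5, 9, 4, 2, 6]

# Fraction of digits the listener identified correctly
correct = sum(p == r for p, r in zip(played, reported))
accuracy = correct / len(played)
print(f"intelligibility: {accuracy:.0%}")  # → 75%
```

In the study this score was averaged over many listeners and trials; the sketch shows only the per-listener calculation.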
Next, the researchers plan to train the system on more complicated words and sentences while running the same type of tests on the resulting brain signals. Eventually, they want the system to be implanted in the user’s body to translate thoughts directly into words.
“In this scenario, if the wearer thinks ‘I need a glass of water,’ our system could take the brain signals generated by that thought, and turn them into synthesized, verbal speech,” Mesgarani said. “This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”
The study was published in Scientific Reports.