Image caption: Several generations of neural implants from Neuralink.
For people with limited use of their limbs, speech recognition can be critical to their ability to operate a computer. But for many, the same conditions that limit limb motion also affect the muscles that enable speech, making any form of communication a challenge, as physicist Stephen Hawking famously demonstrated. Ideally, we'd like to get upstream of any physical activity entirely and find a way of translating nerve impulses directly into speech.
In this case, the researchers had access to four individuals who had electrodes implanted to monitor for seizures; the electrodes happened to be located in parts of the brain involved in speech. The participants were asked to read a set of sentences, which together drew on a limited set of unique words, while the implants recorded their neural activity. Some of the participants read from additional sets of sentences, but this first set provided the primary experimental data.
The recordings, along with audio recordings of the actual speech, were then fed into a recurrent neural network, which processed them into an intermediate representation that, after training, captured their key features. That representation was then sent to a second neural network, which attempted to reconstruct the full text of the spoken sentence.
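The paper's actual model is far more sophisticated, but the two-network idea can be sketched in plain NumPy: an encoder RNN compresses a recording into a single intermediate vector, and a decoder RNN unrolls that vector into a word sequence. Every dimension, weight, and the fake recording below are invented for illustration; the networks are untrained and emit arbitrary words.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not from the paper):
N_FEATURES = 16   # electrode channels per timestep
HIDDEN = 32       # size of the intermediate representation
VOCAB = 50        # word vocabulary of the decoder

def rnn_step(x, h, W_x, W_h, b):
    """One vanilla RNN step: h' = tanh(W_x x + W_h h + b)."""
    return np.tanh(W_x @ x + W_h @ h + b)

# Encoder weights: compress the neural recording into a hidden state.
We_x = rng.normal(0, 0.1, (HIDDEN, N_FEATURES))
We_h = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
be = np.zeros(HIDDEN)

# Decoder weights: unroll the hidden state into a word sequence.
Wd_x = rng.normal(0, 0.1, (HIDDEN, VOCAB))
Wd_h = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_out = rng.normal(0, 0.1, (VOCAB, HIDDEN))
bd = np.zeros(HIDDEN)

def encode(neural_seq):
    """Run the encoder over a (timesteps, features) recording."""
    h = np.zeros(HIDDEN)
    for x in neural_seq:
        h = rnn_step(x, h, We_x, We_h, be)
    return h  # the intermediate representation

def decode(h, max_words=10):
    """Greedily emit word ids from the intermediate representation."""
    words, prev = [], np.zeros(VOCAB)
    for _ in range(max_words):
        h = rnn_step(prev, h, Wd_x, Wd_h, bd)
        word = int(np.argmax(W_out @ h))
        words.append(word)
        prev = np.eye(VOCAB)[word]  # feed the chosen word back in
    return words

recording = rng.normal(size=(120, N_FEATURES))  # fake 120-timestep recording
sentence = decode(encode(recording))
```

The key design point is the bottleneck: the decoder never sees the raw electrode data, only the compressed representation, which is what forces training to distill the speech-relevant features.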
How’d it work?
The primary limitation here is the extremely limited set of sentences available for training — even the participant with the most spoken sentences had less than 40 minutes of speaking time. It was so limited that the researchers worried the system might end up identifying sentences simply by tracking how long each one took to speak. And this did cause some problems: some of the errors the system made involved the wholesale replacement of a spoken sentence with the words of a different sentence from the training set.
Still, outside those errors, the system did pretty well considering its limited training. The authors used a measure of performance called a “word error rate,” which is based on the minimum number of changes needed to transform the translated sentence into the one that was actually spoken. For two of the participants, after the system had gone through the full training set, its word error rate was below eight percent, which is comparable to the error rate of human translators.
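The word error rate is just the word-level edit distance (substitutions, insertions, and deletions) between the produced sentence and the spoken one, divided by the length of the spoken sentence. A minimal implementation of that metric:

```python
def word_error_rate(reference, hypothesis):
    """Minimum word-level edits (substitute/insert/delete) needed to
    turn the hypothesis into the reference, divided by the number of
    words in the reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance, over words not characters.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word in a five-word sentence is a 20% word error rate.
print(word_error_rate("the quick brown fox jumps",
                      "the quick brown dog jumps"))  # → 0.2
```

By this measure, a rate below eight percent means fewer than one word in twelve needed correcting.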
Disabling different parts of the electrode input confirmed that the key areas that the system was paying attention to were involved in speech production and processing. Within that, a major contribution came from an area of the brain that paid attention to the sound of a person’s own voice to give feedback on whether what was spoken matched the intent of the speaker.
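The logic of that ablation analysis — silence one group of electrodes, re-run the decoder, and see how much performance drops — can be sketched as follows. The region names, channel groupings, and the stand-in error function are all hypothetical; a real analysis would plug in the trained decoder and measure the change in word error rate.

```python
import numpy as np

def ablate_region(recording, channels):
    """Zero out the electrode channels belonging to one brain region."""
    masked = recording.copy()
    masked[:, channels] = 0.0
    return masked

# Hypothetical channel groupings for illustration only.
regions = {
    "speech_motor": [0, 1, 2],
    "auditory_feedback": [3, 4],
    "other": [5, 6, 7],
}

def error_with(recording):
    # Stand-in for "run the decoder and compute the word error rate";
    # here we just score how much signal energy was lost, so the
    # sketch runs without a trained model.
    return 1.0 - np.abs(recording).sum() / TOTAL

rng = np.random.default_rng(1)
recording = rng.normal(size=(100, 8))  # fake 100-timestep, 8-channel recording
TOTAL = np.abs(recording).sum()

# A region's importance is how much the error grows when it is silenced.
importance = {name: error_with(ablate_region(recording, chans))
              for name, chans in regions.items()}
```

In the study, the regions whose removal hurt most were those involved in speech production and in monitoring the sound of one's own voice.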
The work also suggests that a significant portion of training could take place with people other than the individual a given system is ultimately used for. That would be critical for those who have lost the ability to vocalize, and it would significantly reduce the amount of training time each individual needs on the system.
Obviously, none of this will work until getting implants like this is safe and routine. But there’s a bit of a chicken-and-egg problem there, in that there’s no justification for giving people implants without the demonstration of potential benefits. So, even if decades might go by before a system like this is useful, simply demonstrating that it could be useful can help drive the field forward.