People who have lost the ability to speak due to injury or stroke have been given hope that they may one day be able to communicate with words again, thanks to a groundbreaking trial at the University of California San Francisco’s Neuroscience Institute.
The aim of the BRAVO1 study - which stands for Brain-Computer Interface Restoration of Arm and Voice - is to translate electrical signals sent from the brain to the vocal tract into text on a screen.
To do this, a device is fitted to the surface of the subject’s brain, enabling scientists to identify words from a limited 50-word vocabulary using advanced computer algorithms.
Mapping brain patterns
The team behind the project has been able to identify this specific set of words through years of research in mapping brain activity patterns associated with distinct vocal tract movements.
"The cortex controls a lot of really important human behaviour such as speech," Dr David Moses, the study’s lead author, explained.
"By placing sensors, electrical sensors on the surface of the brain and the area that normally controls signals sent to the vocal tract… you can actually pick up those electrical signals, record that activity, digitise it and send it to our computer for further processing," he said.
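The pipeline Moses describes — record electrical activity from many electrodes, digitise it, and hand it to a computer for processing — can be illustrated with a minimal sketch. Everything here is assumed for illustration (the channel count, sampling rate, window length, and the power-per-window feature are not taken from the study):

```python
import numpy as np

# Hypothetical parameters, not from the article: 128 electrodes at 1,000 Hz.
N_CHANNELS, SAMPLE_RATE = 128, 1000
WINDOW = SAMPLE_RATE // 4  # 250 ms analysis windows

rng = np.random.default_rng(0)

# Stand-in for two seconds of digitised electrode recordings.
recording = rng.normal(size=(N_CHANNELS, 2 * SAMPLE_RATE))

def window_features(signal, window=WINDOW):
    """Average signal power per channel in each non-overlapping window."""
    n_windows = signal.shape[1] // window
    trimmed = signal[:, : n_windows * window]
    chunks = trimmed.reshape(signal.shape[0], n_windows, window)
    return (chunks ** 2).mean(axis=2)  # shape: (channels, windows)

features = window_features(recording)  # one feature vector per 250 ms
```

In a setup like this, each 250 ms column of `features` would be the unit passed on to a downstream decoder; real systems use far richer features than raw power, so this is only the shape of the idea.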
According to the researchers, people normally communicate at between 150 and 200 words per minute. With its vocabulary still limited, this system allows the subject to communicate 2 to 4 words per second.
One of the key challenges to making this system work effectively, however, is the sheer volume of electrical signals the brain sends at any given time.
For researchers on this project, it is often difficult to separate out the signals that pertain to speech from the rest of the brain’s processes.
Trial is in its early stages
"There is still a lot of what we refer to as noise, which could be anything from just random electrical fields that are also detected by the electrodes or just separate brain activity that's happening in that brain region that's unrelated to the task," said Moses.
"One of the challenges is to use these advanced machine learning methods to try and extract the relevant features and the relevant types of information from the brain signals, even though there is a lot of other interference, if you will".
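One way to picture the decoding problem Moses describes — pulling the relevant word-related pattern out of noisy signals — is template matching against per-word "signatures". This is a deliberately simplified stand-in, not the study's actual machine learning model; the 50-word vocabulary matches the article, but the feature dimension, noise level, and nearest-signature rule are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 50 vocabulary words, each with a characteristic
# 32-dimensional neural "signature" (pure illustration).
N_WORDS, N_FEATURES = 50, 32
signatures = rng.normal(size=(N_WORDS, N_FEATURES))

def classify(observation, templates):
    """Return the index of the word whose signature is closest."""
    distances = np.linalg.norm(templates - observation, axis=1)
    return int(np.argmin(distances))

# Simulate an attempt to say word 7, buried in additive noise
# (the "interference" Moses mentions).
true_word = 7
noisy_observation = signatures[true_word] + 0.3 * rng.normal(size=N_FEATURES)
predicted = classify(noisy_observation, signatures)
```

Even this toy decoder recovers the intended word when the noise is modest, which conveys the core point: the signal of interest is separable from interference only because each word's pattern is distinctive enough across many channels.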
While Professor Edward Chang, a neurosurgeon also working on the project, admits that the trial is in its early stages, he is nevertheless enthusiastic about the potential of this technology to address the needs of those affected by speech loss.
"It's devastating, you know, to not be able to communicate basic needs, so I think that the need is extraordinary. I think that we're on our way to figuring out how to address some of these needs," Chang said.