A new artificial intelligence system called a semantic decoder can translate a person’s brain activity into a continuous stream of text while the person listens to a story or imagines telling one.
The system was developed by researchers at The University of Texas at Austin who said it might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again.
The work was published in the journal Nature Neuroscience and relies, in part, on a transformer model similar to the ones that power OpenAI’s ChatGPT and Google’s Bard.
Brain activity is measured using a functional MRI (fMRI) scanner. The decoder requires extensive training, during which the individual listens to hours of podcasts in the scanner.
Ph.D. student Jerry Tang prepares to collect brain activity data in the Biomedical Imaging Center at The University of Texas at Austin. The researchers trained their semantic decoder on dozens of hours of brain activity data from participants, collected in an fMRI scanner. (Nolan Zunk/The University of Texas at Austin)
Later, provided a participant was open to having their thoughts decoded, they listened to a new story or imagined telling a story, allowing the machine to generate corresponding text from brain activity alone.
While the result is not a word-for-word transcript, it captures the gist of what is being said or thought.
About half the time, when the decoder has been trained to monitor a participant’s brain activity, the machine produces text that closely – and sometimes precisely – matches the intended meanings of the original words.
A participant who listened to a speaker say that they do not have their driver’s license yet had their thoughts translated as, “She has not even started to learn to drive yet.”
The researchers said they were getting the model to decode continuous language for extended periods of time with complicated ideas.
Alex Huth (left), Shailee Jain (center) and Jerry Tang (right) prepare to collect brain activity data in the Biomedical Imaging Center at The University of Texas at Austin. The researchers trained their semantic decoder on dozens of hours of brain activity data from participants, collected in an fMRI scanner. (Nolan Zunk/The University of Texas at Austin)
In addition to having participants listen to or think about stories, the researchers asked subjects to watch four short, silent videos while inside the scanner. The semantic decoder was able to use their brain activity to accurately describe certain events from the videos.
Notably, when the researchers tested the system on people it had not been trained on, the results were unintelligible.
Alex Huth (left) discusses the semantic decoder project with Jerry Tang (center) and Shailee Jain (right) in the Biomedical Imaging Center at The University of Texas at Austin. (Nolan Zunk/The University of Texas at Austin)
The system is not currently practical for use outside the laboratory because of its reliance on time spent in an fMRI machine. Nevertheless, the researchers think the work could transfer to other, more portable brain-imaging systems.
“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” study leader Jerry Tang, a doctoral student in computer science, said in a statement. “We want to make sure people only use these types of technologies when they want to and that it helps them.”
The authors said the system could not be used on someone without their knowledge, and that there are ways a person can protect against having their thoughts decoded – for example, by thinking of animals.
“I think right now, while the technology is in such an early state, it’s important to be proactive by enacting policies that protect people and their privacy,” Tang said. “Regulating what these devices can be used for is also very important.”