Neuroscientists may someday be able to eavesdrop on the steady internal monologue running through our minds, or to hear the imagined speech of a stroke victim or locked-in patient who is unable to talk, say researchers at the University of California, Berkeley. The work, conducted in the labs of Robert Knight at Berkeley and Edward Chang at UCSF, is reported Jan. 31 in the open-access journal PLoS Biology.
The scientists have succeeded in decoding electrical activity in a region of the human auditory system called the superior temporal gyrus (STG). By analyzing the pattern of STG activity, they were able to reconstruct words that subjects listened to in normal conversation.
"This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig's disease and can't speak," said Knight, a professor of psychology and neuroscience at UC Berkeley. "If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit."
Brian Pasley, a post-doctoral researcher in Knight's lab, tested two different computational models for matching spoken sounds to the pattern of activity in the electrodes. Patients heard a single word, and Pasley used each model to predict the word from the electrode recordings alone. The better of the two models reproduced a sound close enough to the original word for him and his fellow researchers to guess the word at better-than-chance accuracy.
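The general approach, as described in the study's coverage, is to learn a mapping from electrode activity back to a sound representation, then compare the decoded sound against candidate words. The sketch below illustrates that idea with entirely simulated data: a linear "neural code" is invented, a ridge-regression decoder is fit to invert it, and a single trial is classified by nearest spectrogram template. Everything here (electrode counts, the linear mixing, the ridge decoder) is a simplifying assumption for illustration, not the models the researchers actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 32 simulated electrodes, sounds summarized as
# 16-bin spectrogram vectors. Assumed for illustration only.
n_electrodes, n_bins, n_train = 32, 16, 200

# Simulated training data: neural activity is a noisy linear mixture
# of the sound's spectrogram.
spectrograms = rng.standard_normal((n_train, n_bins))
mixing = rng.standard_normal((n_bins, n_electrodes))
neural = spectrograms @ mixing + 0.1 * rng.standard_normal((n_train, n_electrodes))

# Ridge-regression decoder: map electrode activity back to spectrogram bins.
lam = 1.0
W = np.linalg.solve(neural.T @ neural + lam * np.eye(n_electrodes),
                    neural.T @ spectrograms)

# Single-trial test: decode one new sound, then guess which of five
# candidate "words" (spectrogram templates) is closest.
candidates = rng.standard_normal((5, n_bins))
true_idx = 2
trial = candidates[true_idx] @ mixing + 0.1 * rng.standard_normal(n_electrodes)
decoded = trial @ W

guess = int(np.argmin(np.linalg.norm(candidates - decoded, axis=1)))
print("guessed word index:", guess)
```

With low simulated noise the decoder recovers the spectrogram well, so nearest-template matching succeeds; the hard part in real recordings is that the neural code is neither linear nor noise-free, which is why single-trial decoding was the demanding test case.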
"We think we would be more accurate with an hour of listening and recording and then repeating the word many times," Pasley said. But because any realistic device would need to identify words accurately the first time they are heard, he decided to test the models on single trials only.
"I didn't think it could possibly work, but Brian did it," Knight said. "His computational model can reproduce the sound the patient heard and you can actually recognize the word, although not at a perfect level."
The ultimate goal of the study was to explore how the human brain encodes speech and determine which aspects of speech are most important for understanding.
"At some point, the brain has to extract away all that auditory information and just map it onto a word, since we can understand speech and words regardless of how they sound," Pasley said. "The big question is, what is the most meaningful unit of speech? A syllable, a phone, a phoneme? We can test these hypotheses using the data we get from these recordings."
In the accompanying podcast, PLoS Biology editor Ruchir Shah sits down with Brian Pasley and Robert Knight to discuss their main findings, the applications for neural prosthetics, and the potential ethical implications of "mind-reading."