Researchers at the University of Rochester have demonstrated for the first time that our brains automatically predict many possible words and their meanings before we've even heard the final sound of the word.
Using a functional MRI (fMRI) scanner, the researchers were able to observe this split-second brain activity directly.
"We had to figure out a way to catch the brain doing something so fast that it happens literally between spoken syllables," said Michael Tanenhaus, the Beverly Petterson Bishop and Charles W. Bishop Professor.
The researchers focused on a small brain region called V5, which is known to activate when a person sees motion.
The idea was to teach undergraduates a set of invented words, some of which denoted movement, and then to observe whether the V5 area became active when a subject heard words that merely sounded similar to the movement words.
To do that, the researchers had to create a set of words with similar beginning syllables but different ending syllables and distinct meanings, one of which denoted motion of the sort that activates the V5 area.
The team created a computer program that displayed irregular shapes and gave them specific names, such as "goki." They also invented verbs: some, like "biduko," meant "the shape will move across the screen," while others, like "biduka," meant the shape would simply change color.
Once the students had learned the new words well enough, the team tested them as they lay in an fMRI scanner. Each student would see one of the shapes on a monitor and hear either "biduko" or "biduka."
Although only one of the words actually denoted motion, the V5 area of the brain activated for both, though less strongly for the color word than for the motion word.
The activation for the color word shows that, for a split second, the brain considered the motion meaning of both candidate words before it heard the final, discriminating syllable: "ka" rather than "ko."
The researchers are already planning more sophisticated versions of the test that focus on brain areas beyond V5, such as those that activate for specific sounds or touch sensations.
They are also planning to watch the brain sort out meaning when it is forced to take syntax into account.
"This opens a doorway into how we derive meaning from language. This is a new paradigm that can be used in countless ways to study how the brain responds to very brief events. We're very excited to see where it will lead us," said Tanenhaus.