MIT neuroscientists have reported a new method for analyzing brain imaging data - one that may give a clearer picture of how our brain produces and understands language. The boffins describe their findings in the Journal of Neurophysiology.
Research with patients who developed specific language deficits (such as the inability to comprehend passive sentences) following brain injury suggests that different aspects of language may reside in different parts of the brain. But attempts to find these functionally specific regions of the brain with current neuroimaging technologies have been inconsistent and controversial.
One reason for this inconsistency may be that most previous studies relied on group analyses, in which brain imaging data were averaged across multiple subjects - a computation that can introduce statistical noise and bias into the analyses.
"Because brains differ in their folding patterns and in how functional areas map onto these folds, activations obtained in functional MRI studies often do not precisely 'line up' across brains," explained Evelina Fedorenko, first author of the study and a postdoctoral associate in Nancy Kanwisher's lab at the McGovern Institute for Brain Research at MIT.
"Some regions of the brain thought to be involved in language are also geographically close to regions that support other cognitive processes like music, arithmetic, or general working memory. By spatially averaging brain data across subjects you may see an activation 'blob' that looks like it supports both language and, say, arithmetic, even in cases where in every single subject these two processes are supported by non-overlapping nearby bits of cortex."
The only way to get around this problem, according to Fedorenko, is to first define "regions of interest" in each individual subject and then investigate those regions by examining their responses to various new tasks. To do this, Fedorenko and her colleagues developed a "localizer" task in which subjects read either sentences or sequences of pronounceable nonwords.
"This new, more sensitive method allows us now to investigate questions of functional specificity between language and other cognitive functions, as well as between different aspects of language," Fedorenko concludes. "We're more likely to discover which patches of cortex are specialized for language and which also support other cognitive functions like music and working memory. Understanding the relationship between language and the rest of condition is one of key questions in cognitive neuroscience."