Researchers at Carnegie Mellon University have determined how the brain deciphers the meanings of nouns by combining brain imaging with machine learning techniques.
Neuroscientists Marcel Just and Vladimir Cherkassky and computer scientists Tom Mitchell and Sandesh Aryal say that understanding how the brain codes nouns is important for treating psychiatric and neurological illnesses.
"In effect, we discovered how the brain's dictionary is organized. It isn't alphabetical or ordered by the sizes of objects or their colors. It's through the three basic features that the brain uses to define common nouns like apartment, hammer and carrot," said Just.
According to the researchers, the three codes, or factors, concern fundamental human concerns: how you physically interact with the object (how you hold it, kick it, twist it, etc.); how it relates to eating (biting, sipping, tasting, swallowing); and how it relates to shelter or enclosure.
The three factors, each coded in three to five different locations in the brain, were found by a computer algorithm that searched for commonalities among brain areas in how participants responded to 60 different nouns describing physical objects.
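The factor-finding step described above can be illustrated with a small sketch. This is not the authors' actual pipeline; it is a minimal, hypothetical stand-in using synthetic data, where a low-rank decomposition (here, SVD on a nouns-by-voxels activation matrix) recovers a small number of shared dimensions across the 60 nouns, analogous to the three factors the algorithm uncovered.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for fMRI data: 60 nouns x 200 voxels.
# Real data would be preprocessed per-voxel activation levels.
n_nouns, n_voxels, n_factors = 60, 200, 3

# Construct the data as a low-rank mixture: each noun gets 3 latent
# factor scores (think: manipulation, eating, shelter) that jointly
# drive voxel activation, plus a little noise.
scores = rng.normal(size=(n_nouns, n_factors))
loadings = rng.normal(size=(n_factors, n_voxels))
data = scores @ loadings + 0.1 * rng.normal(size=(n_nouns, n_voxels))

# A simple factor-analysis-like decomposition via SVD: the leading
# components capture the dimensions shared across nouns.
centered = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Fraction of total variance captured by the first 3 components.
explained = (S[:3] ** 2).sum() / (S ** 2).sum()
print(f"variance explained by 3 factors: {explained:.2f}")
```

With three genuine latent factors in the synthetic data, three components account for nearly all of the variance, which is the kind of signature such an algorithm looks for.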
For example, the word apartment evoked high activation in the five areas that code shelter-related words.
In the case of hammer, the motor cortex was the brain area activated to code the physical interaction.
"To the brain, a key part of the meaning of hammer is how you hold it, and it is the sensory-motor cortex that represents 'hammer holding,'" said Cherkassky.
The study also showed that the noun meanings were coded similarly in all of the participants' brains.
"This result demonstrates that when two people think about the word 'hammer' or 'house,' their brain activation patterns are very similar. But beyond that, our results show that these three discovered brain codes capture key building blocks also shared across people," said Mitchell.
The study marked the first time that the thoughts stimulated by words alone were accurately identified using brain imaging.
In contrast, earlier studies had used picture stimuli, or pictures together with words.
The programs could identify a thought without the benefit of a picture representation in the brain's visual areas, relying instead on the semantic, or conceptual, representation of the objects.
In addition, the team was able to predict where the activation would be for a previously unseen noun.
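Predicting activation for an unseen noun can likewise be sketched, again with synthetic data rather than the study's own model. The assumption here (hypothetical, but in the spirit of the result) is that each noun has a small set of semantic feature scores, that activation is roughly a linear function of those features, and that a linear map fit on the other nouns generalizes to a held-out one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each of 60 nouns has 3 semantic feature scores
# (e.g. manipulation, eating, shelter); voxel activation is assumed
# to be a linear function of those features plus noise.
n_nouns, n_feats, n_voxels = 60, 3, 200
features = rng.normal(size=(n_nouns, n_feats))
true_map = rng.normal(size=(n_feats, n_voxels))
activation = features @ true_map + 0.1 * rng.normal(size=(n_nouns, n_voxels))

# Hold out one noun, fit a linear map on the remaining 59 by least
# squares, then predict the held-out noun's activation pattern.
train_idx = np.arange(1, n_nouns)
W, *_ = np.linalg.lstsq(features[train_idx], activation[train_idx], rcond=None)
predicted = features[0] @ W

# Similarity between predicted and observed activation for the unseen noun.
r = np.corrcoef(predicted, activation[0])[0, 1]
print(f"correlation with held-out noun: {r:.2f}")
```

A high correlation on the held-out noun is what "predicting where the activation would be" amounts to in this simplified leave-one-out picture.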
The study has been published in the journal PLoS One.