A new study has revealed that it is possible to predict what people are going to say by tracking their eye movements.
In their study, Moreno Coco and Frank Keller at the University of Edinburgh, UK, presented 24 volunteers with a series of photo-realistic images depicting indoor scenes such as a hotel reception.
Each volunteer was asked to describe what they saw, and the researchers tracked the sequence of objects the volunteer's gaze settled on while speaking.
Other than being prompted with a keyword, such as "man" or "suitcase", participants were free to describe the scene however they liked. Some typical sentences included "the man is standing in the reception of a hotel" or "the suitcase is on the floor".
The order in which a participant's gaze settled on objects in each scene tended to mirror the order of nouns in the sentence used to describe it.
"We were surprised there was such a close correlation," Keller told New Scientist.
Given that multiple cognitive processes are involved in sentence formation, Coco said that "it is remarkable to find evidence of similarity between speech and visual attention".
The team used the discovery to test whether they could predict which sentence would be used to describe a scene from eye movements alone. They developed an algorithm that, given the gaze data recorded in the earlier experiment, could pick out the correct sentence from a set of 576 candidate descriptions.
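The article does not describe how the algorithm works, but the core idea — that gaze order mirrors noun order — can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the authors' actual model: it ranks candidate descriptions by the longest common subsequence between the sequence of fixated objects and the order in which each sentence mentions those objects. All names and data here are invented for illustration.

```python
# Hypothetical sketch (NOT the study's actual algorithm): rank candidate
# sentences by how well their noun order matches the viewer's gaze order.

def noun_order(sentence, vocabulary):
    """Return the scene objects mentioned in a sentence, in order of mention.
    `vocabulary` is an assumed set of object labels annotated for the scene."""
    words = sentence.lower().replace(".", "").split()
    return [w for w in words if w in vocabulary]

def match_score(gaze_sequence, sentence, vocabulary):
    """Score a candidate by the length of the longest common subsequence
    between the gaze order and the sentence's noun order."""
    nouns = noun_order(sentence, vocabulary)
    m, n = len(gaze_sequence), len(nouns)
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if gaze_sequence[i] == nouns[j]:
                lcs[i + 1][j + 1] = lcs[i][j] + 1
            else:
                lcs[i + 1][j + 1] = max(lcs[i][j + 1], lcs[i + 1][j])
    return lcs[m][n]

def predict_sentence(gaze_sequence, candidates, vocabulary):
    """Return the candidate whose noun order best matches the gaze order."""
    return max(candidates, key=lambda s: match_score(gaze_sequence, s, vocabulary))

# Usage with made-up data:
vocab = {"man", "suitcase", "reception", "floor"}
gaze = ["man", "reception", "suitcase"]  # objects fixated, in order
candidates = [
    "the suitcase is on the floor",
    "the man is standing in the reception of a hotel",
]
print(predict_sentence(gaze, candidates, vocab))
# → the man is standing in the reception of a hotel
```

Here the gaze sequence shares two objects in order ("man", then "reception") with the second sentence but only one with the first, so the second is chosen — the same intuition, scaled up, that lets gaze data discriminate among hundreds of candidate descriptions.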
These results could motivate novel designs for human-machine interfaces that exploit visual cues to improve speech recognition software, suggested Changsong Liu of Michigan State University's Language and Interaction Research lab in East Lansing, who was not involved in the study.