GenX Remote Controls for Robotic Teachers

by Hannah Punitha on Jun 26 2008 6:32 PM

A computer science Ph.D. student at the University of California, San Diego, has shown that it is possible to create machines that speed up or slow down video lectures simply by reading a person's facial expressions to judge whether he or she is following the lesson.

Jacob Whitehill has revealed that the proof-of-concept demonstration is part of a larger project aimed at using automated facial expression recognition to make robots more effective teachers.

In a recent study, Whitehill and his colleagues have shown that it is possible to use the facial expressions people make while watching recorded video lectures to predict their preferred viewing speed, as well as how difficult they find the lecture at each moment in time.
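The study itself is not accompanied by code, but the core idea can be illustrated with a brief sketch. Assuming per-frame facial-expression features have already been extracted (for instance, intensities of brow movement, smiling, or blinking from an expression-recognition system), a simple regression model could map those features to a preferred playback speed. The feature layout and training data below are purely hypothetical, not from the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical per-frame facial-expression features for one viewer:
# columns might be intensities of facial actions (brow raise, blink, smile, ...).
X_train = np.random.rand(500, 4)           # 500 frames, 4 expression channels
speeds = 1.0 + 0.5 * np.random.rand(500)   # preferred playback speed per frame (labels)

# Fit a linear map from expression features to preferred speed.
model = Ridge(alpha=1.0).fit(X_train, speeds)

# At playback time, predict a speed for each new frame's features.
new_frame_features = np.random.rand(1, 4)
predicted_speed = model.predict(new_frame_features)[0]
print(f"Suggested playback speed: {predicted_speed:.2f}x")
```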

He says that the team's new study is at the intersection of facial expression recognition research and automated tutoring systems.

"If I am a student dealing with a robot teacher and I am completely puzzled and yet the robot keeps presenting new material, that's not going to be very useful to me. If, instead, the robot stops and says, 'Oh, maybe you're confused,' and I say, 'Yes, thank you for stopping,' that's really good," said Whitehill, who will present his work at the Intelligent Tutoring Systems conference on June 25.

In a pilot study, Whitehill and his colleagues observed that the facial movements the eight participants made when they found the lecture difficult varied widely from person to person.

However, most of the subjects blinked less frequently during difficult parts of the lecture than during easier portions, which is supported by findings in psychology.
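The blink-rate finding lends itself to a simple heuristic, sketched below under assumed inputs: given a per-frame record of blink onsets, a sliding-window blink rate that falls well below the viewer's own baseline could flag a passage as difficult. The window length and threshold ratio are illustrative choices, not values from the study.

```python
import numpy as np

def flag_difficult(blinks, fps=30, window_s=10, ratio=0.6):
    """Flag windows whose blink rate drops below `ratio` of the viewer's baseline.

    blinks: 0/1 array, one entry per video frame (1 = blink onset).
    Returns one boolean per window: True = likely difficult passage.
    """
    win = fps * window_s
    n_windows = len(blinks) // win
    rates = np.array([blinks[i * win:(i + 1) * win].sum() for i in range(n_windows)])
    baseline = rates.mean()              # viewer's overall blink rate
    return rates < ratio * baseline      # fewer blinks => likely difficult

# Example: synthetic blink track with suppressed blinking in the middle section.
blinks = np.random.rand(9000) < 0.01                 # ~0.3 blinks/s at 30 fps
blinks[3000:6000] = np.random.rand(3000) < 0.003     # harder passage, fewer blinks
print(flag_difficult(blinks.astype(int)))
```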

Whitehill said that one of the project's next steps is to determine which facial movements a given person naturally makes when exposed to difficult or easy lecture material. That information would help train a user-specific model to predict when a lecture should be sped up or slowed down based on spontaneous facial expressions.
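A user-specific controller of the kind Whitehill describes might look something like the following sketch: calibrate on a short clip in which the viewer labels segments easy or hard, train a per-user classifier on the accompanying expression features, and nudge playback speed up or down accordingly. Every function, feature, and threshold here is a hypothetical stand-in for the team's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Calibration: the viewer watches a short clip and marks segments easy (0) or hard (1).
# Expression features per segment are assumed to come from an upstream face tracker.
calib_features = np.random.rand(200, 4)
calib_labels = (np.random.rand(200) > 0.5).astype(int)

user_model = LogisticRegression().fit(calib_features, calib_labels)

def adjust_speed(current_speed, frame_features, step=0.1):
    """Slow down when the user-specific model says 'confused', else speed up."""
    p_confused = user_model.predict_proba(frame_features.reshape(1, -1))[0, 1]
    if p_confused > 0.6:
        return max(0.5, current_speed - step)   # back off for a struggling viewer
    return min(2.0, current_speed + step)       # accelerate when things look easy

speed = 1.0
for _ in range(5):
    speed = adjust_speed(speed, np.random.rand(4))
print(f"Final playback speed: {speed:.2f}x")
```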

The researcher said that to collect examples of the kinds of facial expressions involved in teaching and learning, his team used video-conferencing software to record a session in which a group of people was taught German grammar.

"I wanted to see the kinds of cues that students and teachers use to try to modulate or enrich the instruction. To me, it's about understanding and optimising interactions between students and teachers," he said.

"I can see you nodding right now, for instance. That suggests to me that you're understanding, that I can keep going with what I am saying. If you give me a puzzled look, I might back up for a second," he added.

Source-ANI