Researchers have developed a new technology that allows users to interact in a virtual environment using mouth gestures, a new study reveals.
The proliferation of affordable virtual reality head-mounted displays gives users realistic, immersive visual experiences. However, head-mounted displays occlude the upper half of a user's face, preventing facial action recognition from the full face.
To address this issue, a research team led by Lijun Yin, Professor of Computer Science at Binghamton University, State University of New York, created a new framework that interprets mouth gestures as a medium for real-time interaction within virtual reality.
The system was demonstrated and validated through a real-time virtual reality game: players selected their movement direction with head rotation, moved using mouth gestures, and could eat a piece of cake only by smiling. The system described and classified the user's mouth movements, achieving high recognition rates.
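The interaction scheme above boils down to mapping each classified gesture to an in-game action. A minimal sketch of that mapping is below; it is not the authors' implementation, and the gesture labels and action names are illustrative assumptions.

```python
# Illustrative sketch of a gesture-to-action dispatch table for the VR game
# described in the article. Labels are assumptions, not the paper's taxonomy.

def action_for_gesture(gesture: str) -> str:
    """Map a classified head/mouth gesture to a game action."""
    mapping = {
        "head_rotation": "select_direction",  # head rotation picks the movement direction
        "mouth_open": "move",                 # a mouth gesture drives movement
        "smile": "eat_cake",                  # only smiling lets the player eat the cake
    }
    return mapping.get(gesture, "idle")       # unrecognized gestures do nothing
```

In a real system, the classifier would emit one of these labels per frame, and the game loop would apply the corresponding action.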
"We hope to make this applicable to more than one person, maybe two. Think Skype interviews and communication," said Yin.
"Imagine if it felt like you were in the same geometric space, face to face, and the computer program can efficiently depict your facial expressions and replicate them so it looks real."
Though the technology is still in the prototype phase, Yin believes it is applicable to a wide range of fields.
"The virtual world isn't only for entertainment. For instance, health care uses VR to help disabled patients," said Yin.
"Medical professionals or even military personnel can go through training exercises that may not be possible to experience in real life. This technology allows the experience to be more realistic."
Students Umur Aybars Ciftci and Xing Zhang contributed to this research.
The paper, "Partially occluded facial action recognition and interaction in virtual reality applications," was presented at the 2017 IEEE International Conference on Multimedia and Expo.