Researchers have developed a new method to generate more realistic and accurate expressions of pain on the faces of medical training robots during the physical examination of painful areas. The findings, published in the journal Scientific Reports, suggest the approach could also help teach trainee doctors to read the cues in patient facial expressions and minimize the force needed for physical examinations, and could help detect and correct early signs of bias in medical students by exposing them to a wider variety of patient identities.
"Improving the accuracy of facial expressions of pain on these robots is a key step in improving the quality of physical examination training for medical students," said Sibylle Rerolle, from Imperial’s Dyson School of Design Engineering.
In the study, undergraduate students were asked to perform a physical examination on the abdomen of a robotic patient. Data about the force applied to the abdomen was used to trigger changes in six different regions of the robotic face — known as MorphFace — to replicate pain-related facial expressions.
This method revealed the order in which different regions of a robotic face, known as facial action units (AUs), must be activated to produce the most accurate expression of pain. The study also determined the most appropriate speed and magnitude of AU activation.
The researchers found that the most realistic facial expressions happened when the upper face AUs (around the eyes) were activated first, followed by the lower face AUs (around the mouth). In particular, a longer delay in activation of the Jaw Drop AU produced the most natural results.
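The staged scheme described above can be sketched in code: applied force drives AU intensity, upper-face AUs (around the eyes) activate first, lower-face AUs follow, and the Jaw Drop AU carries the longest onset delay. This is a minimal illustrative sketch only; the AU names, thresholds, delays, and the force-to-intensity mapping are assumptions for demonstration, not the parameters reported in the study.

```python
# Illustrative sketch of force-driven, staged AU activation.
# All AU names, delays, and constants below are assumed, not from the paper.

AU_SCHEDULE = [
    # (AU name, face region, onset delay in seconds after force threshold)
    ("Brow Lowerer", "upper", 0.00),
    ("Lid Tightener", "upper", 0.05),
    ("Nose Wrinkler", "lower", 0.15),
    ("Upper Lip Raiser", "lower", 0.20),
    ("Jaw Drop", "lower", 0.40),  # longest delay: reported as most natural
]

def au_intensity(force_n, elapsed_s, onset_delay_s,
                 threshold_n=2.0, max_force_n=10.0, rise_s=0.3):
    """Return an AU activation level in [0, 1].

    Intensity scales with force above a pain threshold and ramps in
    linearly over `rise_s` seconds after the AU's scheduled onset.
    """
    if force_n <= threshold_n or elapsed_s < onset_delay_s:
        return 0.0
    force_term = min((force_n - threshold_n) / (max_force_n - threshold_n), 1.0)
    ramp = min((elapsed_s - onset_delay_s) / rise_s, 1.0)
    return force_term * ramp

def face_state(force_n, elapsed_s):
    """Snapshot of all AU intensities at one moment during palpation."""
    return {name: round(au_intensity(force_n, elapsed_s, delay), 3)
            for name, _region, delay in AU_SCHEDULE}
```

Early in a firm press, only the eye-region AUs are active; the Jaw Drop AU joins later, mirroring the eyes-before-mouth ordering the researchers found most realistic.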
When doctors conduct physical examinations of painful areas, feedback from patients' facial expressions is important. However, many current medical training simulators cannot display real-time pain-related facial expressions, and they represent only a limited range of patient identities in terms of ethnicity and gender.
"Underlying biases could lead doctors to misinterpret the discomfort of patients — increasing the risk of mistreatment, negatively impacting doctor-patient trust, and even causing mortality," said co-author Thilina Lalitharatne, from the Dyson School of Design Engineering.
Source: IANS