Articulatory Talking Head Can Facilitate Speech Therapy

by Julia Samuel on October 16, 2017 at 7:31 PM

A newly developed system can display the movements of our own tongues in real time.

A team of researchers at the GIPSA-Lab (CNRS/Université Grenoble Alpes/Grenoble INP) and INRIA Grenoble Rhône-Alpes developed the system using an ultrasound probe placed under the jaw to capture the tongue's movements. These movements are processed by a machine learning algorithm that controls an "articulatory talking head."


As well as the face and lips, this avatar shows the tongue, palate and teeth, which are usually hidden inside the vocal tract. This "visual biofeedback" system should be easier to interpret than a raw ultrasound image and therefore help patients correct their pronunciation more effectively. It could be used for speech therapy and for learning foreign languages.

For a person with an articulation disorder, speech therapy relies partly on repetition exercises: the practitioner qualitatively analyzes the patient's pronunciation and explains orally, with the help of drawings, how to position the articulators, particularly the tongue, whose placement patients are generally unaware of.

How effective therapy is depends on how well the patient can integrate what they are told. It is at this stage that "visual biofeedback" systems can help. They let patients see their articulatory movements in real time, and in particular how their tongues move, so that they are aware of these movements and can correct pronunciation problems faster.

For several years, researchers have been using ultrasound to design biofeedback systems. The image of the tongue is obtained by placing under the jaw a probe similar to those conventionally used to image the heart or a fetus. Patients sometimes find this image difficult to use, however, because it is of low quality and provides no information about the position of the palate and teeth.

In this new work, the team proposes to improve this visual feedback by automatically animating an articulatory talking head in real time from the ultrasound images. This virtual clone of a real speaker, in development for many years at the GIPSA-Lab, produces a contextualized and therefore more natural visualization of articulatory movements.

The strength of this new system lies in a machine learning algorithm that researchers have been working on for several years. This algorithm can (within limits) process articulatory movements that users cannot achieve when they start to use the system. This property is indispensable for the targeted therapeutic applications.

The algorithm exploits a probabilistic model based on a large articulatory database acquired from an "expert" speaker capable of pronouncing all of the sounds in one or more languages. This model is automatically adapted to the morphology of each new user, over the course of a short system calibration phase, during which the patient must pronounce a few phrases.
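The press release does not name the probabilistic model, but Gaussian mixture regression is a standard choice for mapping ultrasound-derived features to the articulatory parameters that drive an avatar: fit a joint mixture on paired (feature, parameter) data from the expert speaker, then predict a new user's parameters as the conditional expectation given their features. The sketch below is illustrative only; the data are synthetic stand-ins and the dimensions are invented, not taken from the actual system.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy stand-ins: "ultrasound features" x (e.g., a compact encoding of probe
# images) and "articulatory parameters" y (e.g., tongue-shape controls of the
# talking head). A nonlinear map plus noise mimics the expert database.
n, dx, dy = 2000, 4, 2
x = rng.normal(size=(n, dx))
w = rng.normal(size=(dx, dy))
y = np.tanh(x @ w) + 0.05 * rng.normal(size=(n, dy))

# Fit a joint GMM on stacked [x, y] vectors (the "expert speaker" model).
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(np.hstack([x, y]))

def predict(gmm, x_new, dx):
    """Conditional expectation E[y | x] under the joint GMM."""
    y_pred = np.zeros((len(x_new), gmm.means_.shape[1] - dx))
    for i, xi in enumerate(x_new):
        log_resp = np.zeros(gmm.n_components)
        cond_means = []
        for k in range(gmm.n_components):
            mu, S = gmm.means_[k], gmm.covariances_[k]
            mx, my = mu[:dx], mu[dx:]
            Sxx, Sxy = S[:dx, :dx], S[:dx, dx:]
            diff = xi - mx
            inv = np.linalg.inv(Sxx)
            # Responsibility of component k for xi (x-marginal likelihood;
            # the constant term cancels in the normalization below).
            log_resp[k] = (np.log(gmm.weights_[k])
                           - 0.5 * diff @ inv @ diff
                           - 0.5 * np.linalg.slogdet(Sxx)[1])
            # Per-component conditional mean of y given x.
            cond_means.append(my + Sxy.T @ inv @ diff)
        resp = np.exp(log_resp - log_resp.max())
        resp /= resp.sum()
        y_pred[i] = sum(r * m for r, m in zip(resp, cond_means))
    return y_pred

y_hat = predict(gmm, x[:5], dx)
```

The calibration phase described in the article would correspond to adapting this expert model to a new speaker's morphology from a few phrases, for instance by re-estimating the mixture means on the calibration data; that adaptation step is not shown here.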

This system, validated in a laboratory for healthy speakers, is now being tested in a simplified version in a clinical trial for patients who have had tongue surgery. The researchers are also developing another version of the system, where the articulatory talking head is automatically animated, not by ultrasounds, but directly by the user's voice.

Source: Eurekalert
