
Articulatory Talking Head Can Facilitate Speech Therapy

by Julia Samuel on October 16, 2017 at 7:31 PM

A newly developed system can display the movements of our own tongues in real time.

A team of researchers at the GIPSA-Lab (CNRS/Université Grenoble Alpes/Grenoble INP) and at INRIA Grenoble Rhône-Alpes developed the system using an ultrasound probe placed under the jaw. The tongue movements it captures are processed by a machine learning algorithm that controls an "articulatory talking head."
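The article describes this pipeline only at a high level: each ultrasound frame is turned into articulatory parameters that drive the avatar. As a purely illustrative sketch, a simple ridge regression can stand in for the actual (unspecified) machine-learning model; the frame size, parameter count, and feature extraction below are all assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(frame):
    """Flatten an ultrasound frame into a feature vector (placeholder)."""
    return frame.ravel().astype(float)

# Synthetic stand-in data: 200 frames of 16x16 pixels mapped to
# 6 articulatory parameters (e.g. tongue tip height, jaw opening).
n, h, w, p = 200, 16, 16, 6
frames = rng.normal(size=(n, h, w))
params = rng.normal(size=(n, p))

# Fit a ridge-regression mapping from frame features to parameters.
X = np.stack([extract_features(f) for f in frames])
lam = 1.0  # ridge penalty
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ params)

def animate(frame):
    """Predict articulatory parameters for one new frame."""
    return extract_features(frame) @ W

# In a real-time system, this prediction would be fed to the avatar
# renderer for every incoming ultrasound frame.
new_frame = rng.normal(size=(h, w))
print(animate(new_frame).shape)
```

The point of the sketch is the data flow, not the model: any regressor that maps an ultrasound frame to a small articulatory parameter vector fast enough for real-time display would fit the description in the article.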


As well as the face and lips, this avatar shows the tongue, palate and teeth, which are usually hidden inside the vocal tract. Because this "visual biofeedback" should be easier to interpret than a raw ultrasound image, it ought to produce better correction of pronunciation; the system could be used for speech therapy and for learning foreign languages.

For a person with an articulation disorder, speech therapy relies partly on repetition exercises: the practitioner qualitatively analyzes the patient's pronunciation and explains orally, often with drawings, how to position the articulators, particularly the tongue, whose movements patients are generally unaware of.

How effective therapy is depends on how well the patient can integrate what they are told. It is at this stage that "visual biofeedback" systems can help. They let patients see their articulatory movements in real time, and in particular how their tongues move, so that they are aware of these movements and can correct pronunciation problems faster.

For several years, researchers have been using ultrasound to design biofeedback systems. The image of the tongue is obtained by placing under the jaw a probe similar to those conventionally used to examine the heart or a fetus. Patients often find this image difficult to use, however, because it is of poor quality and provides no information on the location of the palate and teeth.

In this new work, the researchers propose to improve this visual feedback by automatically animating an articulatory talking head in real time from the ultrasound images. This virtual clone of a real speaker, in development for many years at the GIPSA-Lab, produces a contextualized and therefore more natural visualization of articulatory movements.

The strength of this new system lies in a machine learning algorithm that researchers have been working on for several years. This algorithm can (within limits) process articulatory movements that users cannot achieve when they start to use the system. This property is indispensable for the targeted therapeutic applications.

The algorithm exploits a probabilistic model based on a large articulatory database acquired from an "expert" speaker capable of pronouncing all of the sounds in one or more languages. This model is automatically adapted to the morphology of each new user, over the course of a short system calibration phase, during which the patient must pronounce a few phrases.
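The calibration step described above can be pictured as estimating a map from the new user's articulatory space to the expert speaker's. The article does not specify the adaptation method; the least-squares linear map below, together with the feature dimension and the number of calibration frames, is an illustrative assumption only.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # articulatory feature dimension (assumed)

# Unknown morphological difference between the new user and the
# expert speaker, simulated here as a near-identity linear map.
A_true = np.eye(d) + 0.1 * rng.normal(size=(d, d))

# Short calibration phase: ~50 paired frames from a few phrases
# pronounced by the user, aligned with the expert model.
user_feats = rng.normal(size=(50, d))
expert_feats = user_feats @ A_true.T

# Estimate the adaptation map by least squares.
A_est, *_ = np.linalg.lstsq(user_feats, expert_feats, rcond=None)

def adapt(user_frame_feats):
    """Project the user's features into the expert model's space."""
    return user_frame_feats @ A_est

# After calibration, the expert-trained model can be applied to the
# adapted features of every new frame from the user.
err = np.abs(adapt(user_feats) - expert_feats).max()
print(err)
```

With only a few dozen paired frames, such a low-dimensional map can be estimated reliably, which is consistent with the article's claim that calibration needs just a few pronounced phrases.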

This system, validated in a laboratory for healthy speakers, is now being tested in a simplified version in a clinical trial for patients who have had tongue surgery. The researchers are also developing another version of the system, in which the articulatory talking head is animated not by ultrasound but directly by the user's voice.

Source: Eurekalert


