Muscles and receptors in the mouth and throat retain a memory of their position and feeling when a word is uttered, and their signals provide key input for the brain as it hones the power of speech, researchers suggest.
Researchers David Ostry and Sazzad Nasir at McGill University in Montreal carried out an unusual experiment into an enduring mystery: why is it that many deaf people are still able to speak coherently, sometimes years after losing their hearing?
They recruited five middle-aged people who had lost their hearing in adulthood and were now profoundly deaf but had a cochlear implant to pick up sounds.
With the implant turned off, the five were asked to repeat four specific words while the front of their lower jaw was gently pulled forwards by a small device attached to their bottom row of front teeth.
The movement was tiny, but it was sufficient to deform the sounds emitted from the volunteers' mouths.
The point of the experiment was to see whether the volunteers were able to adapt to the sudden speech deformation, even if they could not hear the sound they were making.
The four words were "saw," "say," "sass" and "sane," chosen because their vowel, diphthong and fricative -- the hiss of the "s" -- require a very precise jaw position to be pronounced intelligibly.
Even though they were unable to hear the deformed sounds they made, the volunteers progressively learnt to fix the errors in their pronunciation as they ran through a programme of 300 utterances.
In fact, they learned as fast as a comparison group of people of a similar age and with normal hearing who performed the same experiment.
"The deformation (by the machine) is on the order of millimetres. Even when these individuals can't hear what they're saying, when the motion path of the jaw is changed just a tiny amount, it's enough to prompt a corrective response," Ostry told AFP.
The study is published online by the journal Nature Neuroscience.
He and Nasir attribute the remarkable adaptive power to "mechanoreceptors" -- sensory nerve endings in the soft tissues of the vocal tract -- which remember how they should feel when a word is pronounced properly.
Ostry said that the discovery shows that the brain corrects our speech through two simultaneous inputs -- through hearing the sound that we make, and also through these subtler feedback signals.
"When a child learns to talk, it gets two kinds of information," Ostry told AFP.
"One is the auditory information, being the sound of its own voice. At the same time, it also gets information from receptors that are in the skin and in the muscles.
"These receptors develop not only in expectation of what words should sound like, they also develop an expectation of what the word should feel like."
Ostry said the experiment focussed only on the muscles of the jaw and facial tissues, but the outcome suggests "mechanoreceptors" are also likely to exist in the lips, the tongue and the muscles of the larynx.
"It is conceivably a basis for speech therapy, absolutely," he said.
"When you're deaf, you've lost one of the two systems that basically support speech, but you still have the other one. And the other one accounts for an enormous proportion of the total sensory inflow that's associated with speech.

"It's like having a flat [tyre] and discovering that you have a spare after all."