A new computer system that can automatically screen young children for speech and language disorders, and potentially even provide specific diagnoses, is being developed by researchers at the Massachusetts Institute of Technology (MIT).
Early-childhood intervention for children with speech and language disorders can make a difference in their later academic and social success, the study said. To build the new system, the researchers used machine learning, in which a computer searches large sets of training data for patterns that correlate with diagnoses of speech and language disorders.
The system analyses audio recordings of children's performances on a standardised storytelling test, in which they are presented with a series of images and an accompanying narrative, and then asked to retell the story in their own words.
"The really exciting idea here is to be able to do screening in a fully automated way using very simplistic tools. You could imagine the storytelling task being totally done with a tablet or a phone. I think this opens up the possibility of low-cost screening for large numbers of children," said John Guttag, former Professor at the MIT.
The researchers evaluated the system's performance using a standard measure called area under the curve, which describes the tradeoff between exhaustively identifying members of a population who have a particular disorder, and limiting false positives.
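To make the metric concrete, here is a minimal sketch (not the paper's code) of area under the ROC curve, using its rank interpretation: the probability that a randomly chosen child who has the disorder is scored higher than a randomly chosen child who does not. The labels and scores below are toy values for illustration only.

```python
def auc(labels, scores):
    """Area under the ROC curve via pairwise comparison: the fraction
    of (positive, negative) pairs in which the positive case gets the
    higher score (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = has the disorder, 0 = typically developing;
# higher score = the screen is more confident a disorder is present.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(round(auc(labels, scores), 3))  # → 0.889
```

A value of 1.0 would mean every affected child outranks every unaffected one; 0.5 is chance level, so the metric captures exactly the tradeoff the researchers describe between catching all true cases and limiting false positives.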
"Assessing children's speech is particularly challenging because of high levels of variation even among typically developing children. You get five clinicians in the room and you might get five different answers," Guttag added.
Unlike speech impediments, speech and language disorders both have neurological bases. But, the investigators explain, they affect different neural pathways: speech disorders affect the motor pathways, while language disorders affect the cognitive and linguistic pathways.
The researchers had hypothesised that pauses in children's speech, as they struggled to either find a word or string together the motor controls required to produce it, were a source of useful diagnostic data.
They identified a set of 13 acoustic features of children's speech that their machine-learning system could search, seeking patterns that correlated with particular diagnoses. These were things like the number of short and long pauses, the average length of the pauses, the variability of their length, and similar statistics on uninterrupted utterances.
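Features of that kind can be computed from timestamped speech. The sketch below is a rough illustration, not the researchers' feature set: the `(start, end)` word timings, the pause thresholds, and the exact statistics are all assumptions made for the example.

```python
import statistics

SHORT_PAUSE = 0.25  # hypothetical thresholds in seconds; the study's
LONG_PAUSE = 1.0    # actual cutoffs are not given in this article

def pause_features(word_times):
    """Given (start, end) times for each spoken word, compute pause
    statistics of the kind described: counts of short and long pauses,
    the average pause length and its variability, and the lengths of
    uninterrupted utterances (runs of words with no pause between them)."""
    gaps = [b[0] - a[1] for a, b in zip(word_times, word_times[1:])]
    pauses = [g for g in gaps if g >= SHORT_PAUSE]
    # Split the recording into uninterrupted utterances at each pause.
    utterances, run = [], 1
    for g in gaps:
        if g >= SHORT_PAUSE:
            utterances.append(run)
            run = 1
        else:
            run += 1
    utterances.append(run)
    return {
        "n_short_pauses": sum(SHORT_PAUSE <= g < LONG_PAUSE for g in gaps),
        "n_long_pauses": sum(g >= LONG_PAUSE for g in gaps),
        "mean_pause": statistics.mean(pauses) if pauses else 0.0,
        "pause_stdev": statistics.stdev(pauses) if len(pauses) > 1 else 0.0,
        "mean_utterance_words": statistics.mean(utterances),
    }

# Four words with one long pause in the middle → two two-word utterances.
feats = pause_features([(0.0, 0.3), (0.35, 0.6), (1.9, 2.2), (2.25, 2.5)])
```

A vector of such statistics per child is then what the machine-learning system searches for diagnostic patterns.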
The machine-learning system was trained on three different tasks: identifying any impairment, whether speech or language; identifying language impairments; and identifying speech impairments.
The children whose performances on the storytelling task were recorded in the data set had been classified as typically developing, as suffering from a language impairment, or as suffering from a speech impairment.
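Those three-way labels map directly onto the three binary training tasks. The sketch below shows one plausible relabelling; the record structure and label strings are hypothetical, not taken from the study's data set.

```python
# Hypothetical records: each child carries one of the three diagnoses
# described in the article.
records = [
    {"id": 1, "label": "typical"},
    {"id": 2, "label": "language_impairment"},
    {"id": 3, "label": "speech_impairment"},
]

def make_task_labels(records):
    """Derive binary targets for the three training tasks: any
    impairment (speech or language), language impairment only,
    and speech impairment only."""
    return {
        "any_impairment": [r["label"] != "typical" for r in records],
        "language": [r["label"] == "language_impairment" for r in records],
        "speech": [r["label"] == "speech_impairment" for r in records],
    }

tasks = make_task_labels(records)
```

Training one classifier per task on the same acoustic features is a standard way to get both a general screen and the more specific diagnoses the article mentions.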