According to Karen Iler Kirk, a professor of speech, language and hearing sciences, the traditional way to assess speech understanding in people with hearing loss is to put them in a quiet room and ask them to repeat words produced by one person they can't see.
However, Kirk says the new research is aimed at developing "new tests that reflect more natural listening situations with visual cues, different background noises, voice quality, dialects and speaking rates."
"This is a more accurate way to predict how people perceive speech in the real world and, therefore, can help us determine appropriate therapy and interventions, such as cochlear implants," she adds.
Kirk received a $2.8 million grant from the National Institute on Deafness and Other Communication Disorders for the five-year project to develop two new audiovisual and multi-talker sentence tests that expand upon the traditional spoken word recognition format in use since the 1950s. One test is for adults and the other for children. More than 1,000 people ages 4-65 will participate in the study.
"The traditional spoken word recognition format has been used to determine the need for some sensory aids, such as hearing aids, which are used to amplify sound. However, it is not the best method for assessing the benefits of other sensory aids, such as the more expensive cochlear implants," Kirk said.
A cochlear implant is an electronic device that can provide a sense of sound to someone who is deaf or severely hard of hearing. The surgically implanted device picks up and processes sound, converting it into electrical impulses that are sent to the auditory nerve. More than 100,000 people worldwide have received cochlear implants, and more health insurance companies are paying for the surgery and therapy, Kirk said.
Kirk added that the project is also expanding the word lists from traditional monosyllabic words to a wider range of words selected by how often they are used and by lexical density, the number of words phonetically similar to the target.
"It's important to use sentence materials that are produced by different speakers because in the real world, we do not listen to just one person," Kirk said.
In addition to the auditory component, the materials will be presented in a visual format so listeners can see and hear the phrase.
"This is really important because hearing-impaired people often have great difficulty understanding speech if they are just listening. Seeing the face and following lip reading cues can help someone understand the intended message," she said.
Participants will be tested in auditory-only, visual-only or auditory plus visual modalities. At the end of the project, DVDs containing the test, as well as instruction booklets, data-gathering forms and a manual for data interpretation, will be available to professionals.