A new system that adapts cell phone sound processing holds promise for bringing cochlear implant technology closer to offering the best of both acoustic worlds, speech and music, to people with hearing loss.
Compared with the conventional cochlear-implant sound-sampling strategy, the new scheme significantly improved melody perception. When nine cochlear-implant users were asked to identify 10 melodies, the new scheme yielded a 10 to 20 percent improvement.
Notes lead researcher Fan-Gang Zeng, research director of the Hearing and Speech Lab at the University of California, Irvine: "One potential application of this scheme is to one day integrate cochlear implants with smartphones so that future users can not only get better performance, but also seamless communication. Imagine one device that helps you hear and connects all."
In current cochlear-implant pitch-encoding schemes for rendering melody, the original sound signals are significantly altered. These alterations can degrade speech perception, meaning that improvement in hearing music comes at a cost to hearing speech.
To overcome this trade-off, the new approach takes advantage of spectral constancy: tone quality is perceived as unchanged so long as the spectral envelope is preserved. The scheme keeps the spatial positions that voiced sounds occupy within a given timeframe, while altering only the duration of the pitch cycles. This minimizes distortion of the sound signals of both speech and music.
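The general idea of changing pitch while leaving the spectral envelope, and hence perceived tone quality, intact can be illustrated with a PSOLA-style sketch in Python. Everything here is illustrative, not the researchers' actual scheme: the function name, the assumption of a known fixed pitch, and the toy test signal are all made up for the example. Each pitch cycle keeps its waveform shape (its spectral envelope); only the spacing between cycles changes.

```python
import numpy as np

def psola_pitch_shift(signal, sr, src_f0, tgt_f0):
    """PSOLA-style pitch shift sketch: extract two-period Hann-windowed
    pitch cycles, then overlap-add them at the *target* pitch period.
    Each cycle's shape (spectral envelope) is untouched, so timbre is
    roughly preserved while the fundamental frequency changes.
    Note: this naive sketch also shortens duration when raising pitch;
    full PSOLA duplicates or drops cycles to keep duration constant."""
    src_period = int(round(sr / src_f0))   # samples per source pitch cycle
    tgt_period = int(round(sr / tgt_f0))   # samples per target pitch cycle
    frame_len = 2 * src_period             # two-period analysis window
    window = np.hanning(frame_len)

    # Analysis pitch marks: one per source period (assumes fixed pitch).
    marks = np.arange(src_period, len(signal) - src_period, src_period)

    out = np.zeros(len(signal) + frame_len)
    pos = src_period                       # first synthesis mark
    for m in marks:
        frame = signal[m - src_period : m + src_period] * window
        out[pos - src_period : pos + src_period] += frame
        pos += tgt_period                  # re-space cycles at target period
        if pos + src_period >= len(out):
            break
    return out[:pos]

# Toy usage: a 100 Hz two-harmonic tone shifted up to 125 Hz.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 100 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)
shifted = psola_pitch_shift(tone, sr, src_f0=100.0, tgt_f0=125.0)
```

Because the windowed cycles are copied unchanged and merely re-spaced, the output's harmonics move to multiples of the new fundamental while their relative strengths still follow the original cycle's spectrum, which is the "spectral constancy" intuition in miniature.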
The findings will be presented at the 161st meeting of the Acoustical Society of America in Seattle, Washington.