
Researchers have identified the mechanism that helps the brain locate where a sound is coming from.
Animals have the capacity to locate the source of a sound by detecting microsecond (one-millionth of a second) differences in its arrival time at their two ears.
Scientists have now discovered that one reason these neurons are able to perform such a rapid and sensitive computation is that they are extremely responsive to the input's "rise time": the time it takes the synaptic input to reach its peak.
The brain compares the different arrival times of a sound at each ear to estimate the location of its source.
The neurons encoding these differences, called interaural time differences (ITDs), receive a message from each ear. After receiving these messages, or synaptic inputs, they perform a microsecond computation to determine the source of the sound.
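The geometry behind this computation can be illustrated with a simple path-length model (an illustration of the general principle, not a model from the study): a sound from one side travels farther to reach the far ear, and the extra path length divided by the speed of sound gives the ITD. A minimal sketch, using an assumed head width:

```python
import math

def interaural_time_difference(angle_deg, head_width_m=0.18, speed_of_sound=343.0):
    """Approximate ITD (in seconds) for a source at the given azimuth.

    Uses the simple path-length model ITD = (d / c) * sin(theta).
    The 0.18 m head width is an illustrative assumption, not a value
    from the study.
    """
    return (head_width_m / speed_of_sound) * math.sin(math.radians(angle_deg))

# A source directly ahead (0 degrees) produces no difference; a source
# 90 degrees to one side produces a difference on the order of half a
# millisecond, and these neurons resolve far smaller differences still.
itd_side = interaural_time_difference(90)
```

This also makes clear why microsecond sensitivity matters: small changes in azimuth near the midline shift the ITD by only tens of microseconds.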
The researchers examined this process in gerbils, which are good candidates for study because they process sounds in a similar frequency range and with apparently similar neuro-architecture as humans.
They showed that the rise times of the synaptic inputs coming from the two ears differ: messages from the ipsilateral ear rise faster than those driven by the contralateral ear. (The brain has two groups of neurons that perform this computation, one in each hemisphere; ipsilateral messages come from the same-side ear and contralateral messages from the opposite-side ear.) In addition, they found that the arrival times of the messages coming from each ear were different.
Given this newfound complexity in the way sound signals reach these neurons, the researchers concluded that the neurons could not be processing them in the way previously theorized.
Key insights about how these neurons actually process sound coming from both ears were obtained using a computer model.
Their results showed that the neurons perform the computation differently than neuroscientists had previously proposed.
These neurons not only encode the coincidence in arrival time of the two messages; they also detect details of the input's shape that are more directly related to the time scale of the computation itself than the features proposed in previous studies.
"Some neurons in the brain respond to the net amplitude and width of summed inputs; they are integrators. These auditory neurons respond to the rise time of the summed input and care less about the width. They are differentiators-key players on the brain's calculus team for localizing a sound source," explained Pablo Jercog and John Rinzel, two of the study's co-authors.
The findings will be published in PLoS Biology.
Source: ANI