2015
DOI: 10.1371/journal.pcbi.1004294

The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

Abstract: In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It…
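The opponent-channel code the abstract describes can be sketched numerically: two broadly tuned hemifield channels whose tuning curves peak at lateral positions and are steepest at the interaural midline, so that the channel difference carries azimuth with maximal sensitivity near 0°. This is an illustrative sketch, not the paper's model; the sigmoidal channel shape and the slope parameter are assumptions.

```python
import numpy as np

def opponent_channels(azimuth_deg, slope=0.05):
    """Firing rates of two broadly tuned hemifield channels.

    Each channel is a sigmoid of azimuth: peaks lie at lateral
    positions, while the steepest slope sits at the midline (0 deg).
    Parameter values are illustrative, not taken from the paper.
    """
    left = 1.0 / (1.0 + np.exp(slope * azimuth_deg))    # prefers left hemifield
    right = 1.0 / (1.0 + np.exp(-slope * azimuth_deg))  # prefers right hemifield
    return left, right

def decode_azimuth(left, right, slope=0.05):
    """Invert the opponent code: read azimuth off the channel difference."""
    # right - left = tanh(slope * az / 2)  =>  az = 2 * atanh(diff) / slope
    return 2.0 * np.arctanh(right - left) / slope

az = np.linspace(-80.0, 80.0, 161)
left, right = opponent_channels(az)

# Tuning-curve slope (a proxy for localization sensitivity) is maximal
# at the midline, matching the accuracy pattern described above.
sensitivity = np.abs(np.gradient(right, az))
best = az[np.argmax(sensitivity)]

# Two broad channels suffice to recover azimuth exactly in this toy code.
decoded = decode_azimuth(left, right)
```

The design point the sketch illustrates: no neuron is narrowly tuned to any single location, yet the population difference signal is most informative exactly where behavioral acuity is highest, at the midline.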

Cited by 25 publications (18 citation statements). References 53 publications (113 reference statements).
“…Here we found a robust contralateral bias in the BOLD contrast to equidistant hemifield sectors in both anesthetized and awake monkeys, suggesting that the lack of contralaterality in some previous neuroimaging studies in humans might be due to differences in sound stimulation, i.e., sounds relying on ITD (Krumbholz et al, 2007) or ILD cues alone, and might not be due to an inherent lack of functional sensitivity in fMRI (Werner-Reiss and Groh, 2008). Furthermore, our stimulation design consisted of individualized (in-ear) binaural sound recordings, and the bias we obtained in our contralaterality measures is in accordance with human neuroimaging studies utilizing individualized spatial sounds (Derey et al, 2016; Młynarski, 2015; Palomäki et al, 2005; Salminen et al, 2009). …”
Section: Discussion
confidence: 83%
“…In the macaque, neurons in area CL were found to be sharply tuned to azimuth position in the frontal hemifield and were significantly more selective than in other fields (Tian et al, 2001; Woods et al, 2006). However, recent data in both monkeys (Werner-Reiss and Groh, 2008) and humans (Salminen et al, 2009; Magezi and Krumbholz, 2010; Młynarski, 2015; Derey et al, 2016) suggest that acoustic space is also represented by broadly tuned neurons distributed more widely across AC.…”
Section: Introduction
confidence: 99%
“…For example, the spatial sensitivity of a relatively small proportion of the neurons recorded in different cortical areas is affected when cats carry out an auditory task, implying that a specific subset of the neuronal population may be particularly important during behavior, depending on the stimulus or task involved [4]. Similarly, studies of sound localization in complex or abnormal hearing conditions can provide further constraints for deciding between candidate population codes, with recent work highlighting the importance of neuronal heterogeneity within each hemisphere for representing sound source location [9,93–95].…”
Section: Future Directions
confidence: 99%
“…The use of available audio recordings had the additional consequence that our analysis was restricted to monaural audio. Natural auditory input likely contains important binaural dependencies that contribute to grouping [40,46–49], which our approach could in principle capture if applied to audio recorded from two ears [50]. Another limitation of our approach lies in the use of sparse feature decodings, which efficiently describe speech and music sounds but are a poor description of more noise-like sounds such as textures.…”
Section: Open Issues and Future Directions
confidence: 99%