Statistics of natural reverberation enable perceptual separation of sound and space
Traer & McDermott (2016)
DOI: 10.1073/pnas.1612524113

Abstract: In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of soun…

Cited by 128 publications (142 citation statements); references 52 publications. Selected citation statements appear below.
“…Still, a source/reverberant space separation operation by the auditory system would facilitate computing DRR for distance perception. Our data do not address this question directly but, along with Traer and McDermott (2016), suggest an intriguing counterpoint to interpretations that DRR computation is ill-posed and thus unlikely (Kopco and Shinn-Cunningham, 2011), or is bypassed via perception of other covarying cues (Larsen et al, 2008). …”
Section: Discussion (contrasting)
confidence: 84%
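The excerpt above turns on the direct-to-reverberant ratio (DRR). As a point of reference, here is a minimal Python sketch of one conventional way to estimate DRR from a room impulse response; the 2.5 ms direct-sound window and the synthetic impulse response are illustrative assumptions, not details taken from the cited studies.

```python
import numpy as np

def estimate_drr(impulse_response, sample_rate, direct_window_ms=2.5):
    """Estimate the direct-to-reverberant ratio (DRR) in dB.

    Energy within a short window around the direct-path peak is treated as
    'direct'; everything after that window is treated as 'reverberant'.
    The window length is a common convention, not a value from the paper.
    """
    ir = np.asarray(impulse_response, dtype=float)
    peak = int(np.argmax(np.abs(ir)))                   # direct-path arrival
    half_win = int(direct_window_ms * 1e-3 * sample_rate)
    direct = ir[max(0, peak - half_win): peak + half_win]
    reverberant = ir[peak + half_win:]
    direct_energy = np.sum(direct ** 2)
    reverb_energy = np.sum(reverberant ** 2)
    return 10.0 * np.log10(direct_energy / reverb_energy)

if __name__ == "__main__":
    # Toy impulse response: a direct spike followed by exponentially
    # decaying noise standing in for reverberation (hypothetical values).
    fs = 16000
    t = np.arange(int(0.5 * fs)) / fs
    rng = np.random.default_rng(0)
    ir = 0.02 * rng.standard_normal(t.size) * np.exp(-t / 0.15)
    ir[0] = 1.0                                         # direct sound
    print(f"DRR: {estimate_drr(ir, fs):.1f} dB")
```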
“…Recent behavioral work suggests that the auditory system performs a scene analysis operation in which natural reverberation is separated from the originating sound source and analyzed to extract environmental information (Traer and McDermott, 2016). The neural basis of that operation, however, remains largely unexplored.…”
Section: Introduction (mentioning)
confidence: 99%
“…Although our methodology starts from an encoding scheme based on local features, in part because these are most readily mapped onto early stages of sensory systems [64,65,66], problems of scene analysis can also be approached with generative models more rooted in how sounds are produced. For instance, speech and instrument sounds are fruitfully characterized as the product of a source and a filter that each vary over time in particular ways [67,68], as are sounds in reverberant environments [69], and humans appear to have implicit knowledge of this generative structure [70]. Reconciling these generative models for sound with those rooted in neurally plausible local feature decompositions is a critical topic for future research.…”
Section: Open Issues and Future Directions (mentioning)
confidence: 99%
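To make the source-filter picture concrete, the following minimal Python sketch combines a dry source with an environment filter by convolution; the harmonic source and the exponentially decaying noise filter are simplified, illustrative stand-ins for the generative structure discussed above, not models taken from the cited papers.

```python
import numpy as np

fs = 16000                                   # sample rate (assumed)
rng = np.random.default_rng(1)

# "Source": a short harmonic complex standing in for a dry sound source.
t_src = np.arange(int(0.3 * fs)) / fs
source = sum(np.sin(2 * np.pi * 220.0 * k * t_src) for k in (1, 2, 3))

# "Filter": a simplified reverberant impulse response, modeled here as
# Gaussian noise with an exponential energy decay (decay constant is an
# illustrative assumption, not a measured value).
t_ir = np.arange(int(0.6 * fs)) / fs
impulse_response = rng.standard_normal(t_ir.size) * np.exp(-t_ir / 0.2)
impulse_response /= np.max(np.abs(impulse_response))

# Reverberant sound = source convolved with the environment filter.
reverberant = np.convolve(source, impulse_response)

print(source.shape, impulse_response.shape, reverberant.shape)
```

On this view, separating sound and space amounts to factoring the received signal back into a source and a filter, exploiting regularities in how natural reverberation decays.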
“…200 Hz, where listeners are equally sensitive to ITDs conveyed during rising and peak energy. Acoustically, reverberant energy can be 20 dB less intense at 200 Hz than at 600 Hz in many outdoor settings (Traer & McDermott, 2016). For both frequencies, listeners are least sensitive…”
mentioning
confidence: 99%
“…Frequency-dependent emphasis of early-arriving sound reflects natural frequency-profiles in reverberant energy. Natural outdoor acoustics have seemingly influenced brain mechanisms that suppress responses to reverberation. In many outdoor environments, including forests, fields, and streets, reverberation-time ('T60', the time for reverberant energy to fall by 60 decibels) decreases as sound-frequency decreases below 1500 Hz (Traer & McDermott, 2016).…”
mentioning
confidence: 99%
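Since the excerpt defines T60 as the time for reverberant energy to fall by 60 decibels, a minimal Python sketch of the standard Schroeder backward-integration estimate may be useful; the fit range and the synthetic impulse response are common conventions and illustrative assumptions, not parameters from the cited paper.

```python
import numpy as np

def estimate_t60(impulse_response, sample_rate, fit_range_db=(-5.0, -25.0)):
    """Estimate reverberation time (T60) from an impulse response.

    Uses Schroeder backward integration to obtain the energy decay curve,
    fits a line to the portion between fit_range_db, and extrapolates to a
    60 dB drop. The fit range is a common convention (a T20-style estimate),
    not a parameter taken from the cited paper.
    """
    ir = np.asarray(impulse_response, dtype=float)
    energy = ir ** 2
    edc = np.cumsum(energy[::-1])[::-1]               # Schroeder integral
    edc_db = 10.0 * np.log10(edc / edc[0])
    hi, lo = fit_range_db
    idx = np.where((edc_db <= hi) & (edc_db >= lo))[0]
    t = idx / sample_rate
    slope, intercept = np.polyfit(t, edc_db[idx], 1)  # decay rate in dB/s
    return -60.0 / slope

if __name__ == "__main__":
    # Toy impulse response: exponentially decaying noise whose decay
    # constant implies a T60 of roughly 0.2 s (purely illustrative).
    fs = 16000
    rng = np.random.default_rng(0)
    t = np.arange(int(1.0 * fs)) / fs
    ir = rng.standard_normal(t.size) * np.exp(-t / 0.03)
    print(f"Estimated T60: {estimate_t60(ir, fs):.2f} s")
```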