2017
DOI: 10.1111/nyas.13317
Recent advances in exploring the neural underpinnings of auditory scene perception

Abstract: Studies of auditory scene analysis have traditionally relied on paradigms using artificial sounds—and conventional behavioral techniques—to elucidate how we perceptually segregate auditory objects or streams from each other. In the past few decades, however, there has been growing interest in uncovering the neural underpinnings of auditory segregation using human and animal neuroscience techniques, as well as computational modeling. This largely reflects the growth in the fields of cognitive neuroscience and c…

Cited by 33 publications (27 citation statements)
References 153 publications (273 reference statements)
“…On the other hand, the PS model may play a greater role in the perceptual segregation of simultaneous sounds based on inharmonicity (Moore et al., 1986; Hartmann et al., 1990; Alain et al., 2002; Micheyl et al., 2013a) and the increased, though comparatively modest, tendency to hear simultaneous tones as separate streams as the frequency separation between them increases (Micheyl et al., 2013b). The present findings, therefore, are broadly supportive of the view that auditory scene analysis involves multiple cues and mechanisms, which may be weighted differently depending upon acoustic and behavioral context (Bregman, 1990; Christison-Lagay et al., 2015; Lu et al., 2017; Snyder and Elhilali, 2017).…”
Section: Discussion (supporting)
confidence: 84%
“…1E) or as two segregated streams ("percept 2" in Fig. 1E); see recent reviews [30,44]. There are commonalities between these visual and auditory paradigms, in percept 1 (Fig.…”
Section: Oscillatory Models of Perceptual Bistability (mentioning)
confidence: 79%
“…It is carried out based on the conjunction of multiple stimulus features including pitch, timbre, and temporal structure, with spatial cues such as ITD and ILD contributing as well, albeit to a lesser degree (David et al., 2017; Snyder and Elhilali, 2017; Stainsby et al., 2011). Previous studies have shown that segregation of two competing speakers is relatively good, but that performance drops sharply as the number of concurrent speakers increases beyond two (Brungart et al., 2001; Humes et al., 2017; Rosen et al., 2013; Simpson and Cooke, 2005).…”
Section: Discussion (mentioning)
confidence: 99%
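The excerpt above names the interaural time difference (ITD) as one of the spatial cues feeding stream segregation. As a minimal, hypothetical sketch (not code from the reviewed paper or any cited study), an ITD can be estimated as the lag that maximizes the cross-correlation between the two ear signals; every parameter below (sample rate, delay, noise source) is invented for illustration.

```python
import numpy as np

# Hypothetical illustration: estimate an interaural time difference (ITD)
# as the lag maximizing the cross-correlation of left- and right-ear signals.

fs = 44100                       # sample rate in Hz (assumed)
true_itd_samples = 20            # simulated delay, about 0.45 ms at 44.1 kHz

rng = np.random.default_rng(0)
left = rng.standard_normal(2048)           # broadband source at the left ear
right = np.roll(left, true_itd_samples)    # right ear hears a delayed copy

# Full cross-correlation; lags run from -(N-1) to +(N-1) samples.
xcorr = np.correlate(right, left, mode="full")
lags = np.arange(-len(left) + 1, len(left))
estimated_itd = int(lags[np.argmax(xcorr)])

print(estimated_itd / fs)        # estimated ITD in seconds
```

Dividing the peak lag by the sample rate converts it to seconds; biologically inspired accounts (e.g., the classic Jeffress delay-line model) perform a comparable delay-and-compare operation within frequency channels.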
“…Several neuroimaging studies have pointed to a distinction between brain regions in the auditory system that utilize spatial information for the purpose of stream segregation and regions encoding the spatial location of an auditory source (Shiell et al, 2018; Smith et al, 2010). Localization of sounds in space is computationally challenging, particularly given the effects of reverberation in natural settings (Keating and King, 2015; Traer and McDermott, 2016), and likely relies on different underlying mechanisms than stream segregation (Snyder and Elhilali, 2017). The dissociation between utilization of spatial cues for the purpose of stream segregation and determining their specific spatial location is also supported by behavioral findings demonstrating increased sensitivity and discriminability for spatially segregated sounds, albeit with poor ability to report the spatial location of the target (Klatt et al, 2018; Middlebrooks, 2013; Weller et al, 2016).…”
Section: Discussion (mentioning)
confidence: 99%