2014
DOI: 10.1121/1.4878011
Auditory attention in a dynamic scene: Behavioral and electrophysiological correlates

Abstract: The ability to direct and redirect selective auditory attention varies substantially across individuals with normal hearing thresholds, even when sounds are clearly audible. We hypothesized that these differences arise both from differences in the spectrotemporal fidelity of subcortical sound representations and from differences in the efficacy of the cortical attentional networks that modulate neural representations of the auditory scene. Here, subjects were presented with an initial stream from straight ahead and a second str…

Cited by 2 publications (4 citation statements); references 0 publications.
“…Previous studies showed that such ERP components are strongly modulated by selective attention, but only when auditory objects are successfully segregated (Choi et al., 2013; Choi et al., 2014; Kong et al., 2015). Since those early cortical ERP components originate from multiple regions across Heschl's gyrus (i.e., the primary auditory cortex) and its surrounding areas (e.g., the posterior superior temporal gyrus) (Čeponienė et al., 1998), an efficient and collective way of indexing the neural efficiency of speech unmasking is to use a scalp electroencephalographic (EEG) potential at the vertex (e.g., "Cz" of the international 10-10 system for EEG electrode montage; Koessler et al., 2009) within a limited time window (e.g., the 100–300 ms range after stimulus onset).…”
Section: Assessing Individual Differences in Speech Unmasking
confidence: 99%
“…Auditory scene analysis relies on supra-threshold acoustic features that provide binding cues for auditory grouping (Darwin, 1997). These include the spectra (Lee et al., 2013), location (Frey et al., 2014; Goldberg et al., 2014), temporal coherence (Moore, 1990; Shamma et al., 2013; Teki et al., 2011), rhythm (Calderone et al., 2014; Golumbic et al., 2013; Herrmann et al., 2016; Obleser and Kayser, 2019), and timing (Lange, 2009) of the figure and ground. The fidelity with which such supra-threshold acoustic features are encoded may affect the separation of target speech from background noise.…”
Section: Assessing Individual Differences in Speech Unmasking
confidence: 99%