2022
DOI: 10.1101/2022.02.08.479628
Preprint
Capacity and tradeoffs in neural encoding of concurrent speech during Selective and Distributed Attention

Abstract: Speech comprehension is severely compromised when several people talk at once, due to limited perceptual and cognitive resources. Under some circumstances listeners can employ top-down attention to prioritize the processing of task-relevant speech. However, whether the system can effectively represent more than one speech input remains highly debated. Here we studied how task-relevance affects the neural representation of concurrent speakers under two extreme conditions: when only one speaker was task-relevant…

Cited by 3 publications (5 citation statements)

References 79 publications (151 reference statements)
“…Admittedly, here participants knew in advance which target-words to expect, which likely made this task easier for them. However, these results are in line with other studies showing that people can distribute their attention among two speech stimuli and accurately report content from both, even without prior expectations (Shinn-Cunningham and Ihlefeld, 2004; Shafiro and Gygi, 2007; Ihlefeld and Shinn-Cunningham, 2008; Gygi and Shafiro, 2014; Lambez et al., 2020; Kaufman and Golumbic, 2022). Given that distributing attention does not seem to bear considerable costs to performance, it seems likely that listeners might employ a distributed listening strategy in selective attention paradigms as well and monitor task-irrelevant speech even when not instructed to.…”
Section: Discussion (supporting)
confidence: 91%
“…Moreover, adequate performance on this dual-task was not correlated with individual working-memory capacity (WMC), further supporting its relatively low-cognitive-demand nature (Conway et al., 2001; Colflesh and Conway, 2007; Gygi and Shafiro, 2012; Naveh-Benjamin et al., 2014). At the same time, the neural speech tracking analysis shows that the Narrative Stream was represented more robustly than the Barista Stream, a pattern reminiscent of the enhanced speech-tracking of task-relevant speech in selective attention studies (Kerlin et al., 2010; Ding and Simon, 2012a, 2012b; Mesgarani and Chang, 2012; Power et al., 2012; Zion Golumbic et al., 2013; O’Sullivan et al., 2015; Fuglsang et al., 2017; Fiedler et al., 2019; Har-shai Yahav and Zion Golumbic, 2021; Kaufman and Golumbic, 2022). In discussing this result, we acknowledge the possibility that the lack of a reliable speech-tracking response to the Barista Stream may be due, at least in part, to the highly structured nature of this stimulus, which contains substantial autocorrelation.…”
Section: Discussion (mentioning)
confidence: 92%
“…Bearing these limitations in mind, the converging EEG and GSR results, showing responses for both the own-name and semantic-violation manipulations, provide strong indications that the task-irrelevant barista-speech was monitored by the listeners, and that they were able to glean salient semantic information from it. This converges with the results of several recent studies showing that listeners are actually capable of following the content of two simultaneous speakers, at least in contexts with moderate acoustic load (Agmon et al., 2021; Har-shai Yahav and Zion Golumbic, 2021; Kaufman and Zion Golumbic, 2022; Pinto et al., 2022).…”
Section: Discussion (supporting)
confidence: 91%