2018
DOI: 10.1101/364042
Preprint

Distracting Linguistic Information Impairs Neural Tracking of Attended Speech

Abstract: Listening to speech is difficult in noisy environments, and is even harder when the interfering noise consists of intelligible speech as compared to non-intelligible sounds. This suggests that the ignored speech is not fully ignored, and that competing linguistic information interferes with the neural processing of target speech. We tested this hypothesis using magnetoencephalography (MEG) while participants listened to target clear speech in the presence of distracting noise-vocoded signals. Crucially, the noi…



Cited by 5 publications (9 citation statements)
References 68 publications (150 reference statements)
“…A recent study reanalyzing the data of Millman et al (2015) has found that delta tracking of speech is increased when the NV speech is intelligible as compared to when it is not understood (Di Liberto, Lalor, & Millman, 2018). In noisy environments, neural tracking of the attended speech signal is stronger when the attended speech is fully understood (Dai et al, 2022; Keitel, Gross, & Kayser, 2018), or when the attended speech is in competition with unstructured speech (words presented in random order) as compared to structured speech (speech with phrasal structure) (Har-Shai Yahav & Zion-Golumbic, 2021). The language proficiency of the listener also affects the neural tracking of naturally spoken speech (Lizarazu, Carreiras, Bourguignon, Zarraga, & Molinaro, 2021).…”
Section: Discussion
confidence: 99%
“…We used the same NV speech stimuli as in previous behavioral and MEG studies (Dai, McQueen, Hagoort, & Kösem, 2017; Dai, McQueen, Terporten, Hagoort, & Kösem, 2022). The original speech was selected from a corpus of everyday conversational Dutch sentences, digitized at a 44,100 Hz sampling rate and recorded by either a native male or a native female speaker (Versfeld, Daalder, Festen, & Houtgast, 2000).…”
Section: Methods
confidence: 99%
“…And yet, these modulatory top-down effects do not eliminate the internal representation of task-irrelevant (or ‘to-be-ignored’) speech. Not only is this speech still encoded in auditory cortex (Ding and Simon 2012a; Horton and Srinivasan 2013; Zion Golumbic et al 2013; O’Sullivan et al 2015; Fiedler et al 2019), but a multitude of behavioural and neural findings provide evidence that some linguistic processing is also applied to task-irrelevant speech (Tun et al 2002; Dupoux et al 2003; Beaman et al 2007; Rivenez et al 2008; Carey et al 2014; Parmentier et al 2014, 2018; Schepman et al 2015; Vachon et al 2019; Dai et al 2021; Har-shai Yahav and Zion Golumbic, 2021). Therefore, alongside the convincing evidence that selective-attention biases neural processing such that to-be-attended speech is preferentially encoded, we still do not have a full understanding of the system’s capacity and limitations for processing additional concurrent speech.…”
Section: Introduction
confidence: 99%
“…Research into this question has largely focused on selective attention paradigms, where participants are instructed to listen to one ‘main’ stimulus and perform a task, while disregarding other task-irrelevant stimuli (Cherry, 1953; Broadbent, 1958; Lane and Pearson, 1982; Driver, 2001; Ding et al, 2018). However, these selective attention paradigms have yielded mixed results regarding the depth of processing that is applied to task-irrelevant speech, with some studies suggesting that task-irrelevant speech is only represented at an acoustic level but not at a semantic/linguistic level, while others do find evidence for some linguistic processing of task-irrelevant speech (Dupoux et al, 2003; Brodbeck et al, 2020; Dai et al, 2021; Har-shai Yahav and Zion Golumbic, 2021), particularly if it contains salient content words (Moray, 1959; Treisman, 1960; Wood and Cowan, 1995; Rivenez et al, 2008). Perhaps one of the most well-known demonstrations of this phenomenon is the conscious detection of one’s own name in supposedly “unattended” speech (Moray, 1959; Wood and Cowan, 1995; Conway et al, 2001; Tamura et al, 2012; Tateuchi et al, 2012; Röer et al, 2013; Naveh-Benjamin et al, 2014; Holtze et al, 2021). However, one major tension in interpreting these results is the ambiguity regarding what participants actually do in selective attention tasks.…”
Section: Introduction
confidence: 99%