2019
DOI: 10.1101/717512
Preprint
fMRI reveals language-specific predictive coding during naturalistic sentence comprehension

Abstract: Much research in cognitive neuroscience supports prediction as a canonical computation of cognition in many domains. Is such predictive coding implemented by feedback from higher-order domain-general circuits, or is it locally implemented in domain-specific circuits? What information sources are used to generate these predictions? This study addresses these two questions in the context of language processing. We present fMRI evidence from a naturalistic comprehension paradigm (1) that predictive coding in the …

Cited by 29 publications (58 citation statements)
References 255 publications
“…5-gram surprisal quantifies the predictability of words as the negative log probability of a word given the four words preceding it in context. PCFG and 5-gram surprisal were investigated by Shain, Blank, et al (2020) because their interpretable structure permitted testing of hypotheses of interest in that study. However, their strength as language models has now been outstripped by less interpretable but better performing incremental language models based on deep neural networks (Jozefowicz et al, 2016; Gulordava et al, 2018; Radford et al, 2019).…”
Section: Control Predictors
confidence: 99%
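As a rough illustration of the 5-gram surprisal measure described above (not the cited models themselves), here is a minimal maximum-likelihood 5-gram estimator with add-alpha smoothing in Python. The function names, the padding scheme, and the smoothing choice are illustrative assumptions, not details from the paper.

```python
import math
from collections import defaultdict

def train_ngram(corpus_tokens, n=5):
    """Count n-grams and their (n-1)-word contexts for a simple n-gram model."""
    counts = defaultdict(int)          # (context, word) -> count
    context_counts = defaultdict(int)  # context -> count
    padded = ["<s>"] * (n - 1) + corpus_tokens
    for i in range(len(corpus_tokens)):
        context = tuple(padded[i:i + n - 1])
        word = padded[i + n - 1]
        counts[(context, word)] += 1
        context_counts[context] += 1
    return counts, context_counts

def surprisal(word, context, counts, context_counts, vocab_size, alpha=1.0):
    """Surprisal in bits: -log2 P(word | previous four words), add-alpha smoothed."""
    context = tuple(context)
    num = counts[(context, word)] + alpha
    den = context_counts[context] + alpha * vocab_size
    return -math.log2(num / den)
```

On real corpora one would use backoff or Kneser-Ney smoothing rather than add-alpha, but the core quantity (negative log conditional probability given a four-word context) is the same.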
“…Following Shain, Blank, et al (2020), we use continuous-time deconvolutional regression (CDR; Shain & Schuler, 2019) to infer the shape of the hemodynamic response function (HRF) from data (Boynton et al, 1996; Handwerker et al, 2004). We assumed the following two-parameter HRF kernel based on the widely-used double-gamma canonical HRF (Lindquist et al), where parameters α, β are fitted using black-box variational inference (BBVI).…”
Section: Model Design
confidence: 99%
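For context, the double-gamma canonical HRF that the quoted two-parameter kernel builds on can be sketched as a difference of two gamma densities. The shape, rate, and undershoot-ratio values below are the common SPM-style defaults, assumed for illustration; they are not the fitted α, β of the cited model.

```python
import math

def gamma_pdf(t, shape, rate):
    """Gamma density in the shape/rate parameterization."""
    if t <= 0:
        return 0.0
    return (rate ** shape) * (t ** (shape - 1)) * math.exp(-rate * t) / math.gamma(shape)

def double_gamma_hrf(t, alpha=6.0, beta=1.0, undershoot_shape=16.0, ratio=1.0 / 6.0):
    """Canonical double-gamma HRF: a peak gamma minus a scaled undershoot gamma.

    With these defaults the response peaks around t = 5 s and dips below
    zero around t = 15 s, matching the familiar BOLD response shape.
    """
    return gamma_pdf(t, alpha, beta) - ratio * gamma_pdf(t, undershoot_shape, beta)
```

In the CDR setting described above, kernel parameters like these are treated as latent variables and fitted to the BOLD data rather than fixed at canonical values.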
“…3). Earlier model-based papers generally either used only feature-unspecific word unexpectedness [12, 14, 34, 68, 69] or quantified predictions at a single level such as syntax [13, 24, 50, 70], phonemes [15, 29, 30] or semantics [26]. In all cases, predictions at a specific level were based on a separate model that had to be independently trained and often only incorporated linguistic context in a limited way.…”
confidence: 99%
“…This limited our understanding of the neural processes of language comprehension to small typological domains. To complement monolingual datasets such as the Narrative Brain Dataset (NBD) 14 , the Alice Dataset 15 and the Mother of Unification Studies 16 , we collected a multilingual fMRI dataset consisting of Antoine de Saint-Exupéry's The Little Prince in English, Chinese and French. A total of 112 subjects (49 English speakers, 35 Chinese speakers and 28 French speakers) listened to the whole audiobook for about 100 minutes in the scanner (see Table 1 and Table 2 for the demographics of the participants, data collection procedures, and stimuli information for the English, Chinese, and French datasets).…”
Section: Background and Summary
confidence: 99%