2020 · Preprint
DOI: 10.1101/2020.11.24.396598

Cortical processing of reference in language revealed by computational models

Abstract: Our ability to ascertain which person a pronoun refers to is a central part of human language understanding. Toward a process-based understanding of the brain’s pronoun-resolution abilities, we evaluated four computational models against brain activity during naturalistic comprehension. Each of these models formalizes a different strand of explanation for pronoun resolution that has figured in the cognitive and linguistic literature, including syntactic binding constraints, discourse coherence, and principles …

Cited by 7 publications (4 citation statements)
References 89 publications
“…Second, many recent studies have shown that neural network language models (NNLMs), which embody (some elements of) predictive coding theory, are much more effective at explaining brain activity elicited by natural language than earlier methods ( Anderson et al, 2021 ; Antonello et al, 2021 ; Caucheteux et al, 2021a ; Caucheteux & King, 2022 ; Goldstein et al, 2021 ; Jain & Huth, 2018 ; Jat et al, 2019 ; LeBel, Jain, & Huth, 2021 ; Li et al, 2021 ; Schrimpf et al, 2021 ; Tikochinski et al, 2021 ; Toneva et al, 2020 ). Some of these studies claim that the superiority of NNLMs over other methods is evidence for predictive coding theory in language ( Goldstein et al, 2021 ; Schrimpf et al, 2021 ).…”
Section: Introduction
confidence: 99%
“…We stress the importance of considering multiple languages when building and testing neurobiological models of language processing, assuming that the neural substrates and processes of language are shared among speakers of all languages. As shown in previous work examining coreference resolution using the English and Chinese subset of this corpus, the computational model that best explains the neural signature for pronoun processing is generalizable for both English and Chinese [18]. These data can be reused to address different research questions with a variety of analytical methods.…”
Section: Background and Summary
confidence: 53%
“…Although GLM or encoding models have been commonly applied to fMRI data using long naturalistic stimuli like audiobooks [8,9,11,18,38-40], there are no standardised approaches for analysing complex and high-dimensional naturalistic fMRI data.…”
Section: Analysis Bottleneck
confidence: 99%
“…Research on language cognition also draws on the operating mechanisms of language-computation models to propose hypotheses. A representative example of this type of work is that of Li et al [109]. They used a cognitive model and a neural network model to study the neural mechanism of the brain when understanding pronouns.…”
Section: Language Cognition Experiments Using Language Computation Me...
confidence: 99%