2023
DOI: 10.1111/cogs.13312

Modeling Structure‐Building in the Brain With CCG Parsing and Large Language Models

Abstract: To model behavioral and neural correlates of language comprehension in naturalistic environments, researchers have turned to broad‐coverage tools from natural‐language processing and machine learning. Where syntactic structure is explicitly modeled, prior work has relied predominantly on context‐free grammars (CFGs), yet such formalisms are not sufficiently expressive for human languages. Combinatory categorial grammars (CCGs) are sufficiently expressive, directly compositional models of grammar with flexible c…
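
To make the formalism concrete, the sketch below shows the two core CCG combinatory rules, forward and backward application, plus forward composition, in Python. The category encoding, function names, and the toy "Mary likes John" lexicon are illustrative assumptions for this note, not the parser or implementation used in the paper.

```python
from typing import Optional, Tuple, Union

# Illustrative CCG categories: atoms are strings ("S", "NP"); complex
# categories are (result, slash, argument) triples, so a transitive verb
# (S\NP)/NP is encoded as (("S", "\\", "NP"), "/", "NP").
Cat = Union[str, Tuple]

def forward_application(left: Cat, right: Cat) -> Optional[Cat]:
    """X/Y  Y  =>  X : a rightward-looking function consumes its argument."""
    if isinstance(left, tuple) and left[1] == "/" and left[2] == right:
        return left[0]
    return None

def backward_application(left: Cat, right: Cat) -> Optional[Cat]:
    """Y  X\\Y  =>  X : a leftward-looking function consumes its argument."""
    if isinstance(right, tuple) and right[1] == "\\" and right[2] == left:
        return right[0]
    return None

def forward_composition(left: Cat, right: Cat) -> Optional[Cat]:
    """X/Y  Y/Z  =>  X/Z : composition is what gives CCG its flexible
    constituency, allowing largely left-branching, incremental derivations."""
    if (isinstance(left, tuple) and left[1] == "/"
            and isinstance(right, tuple) and right[1] == "/"
            and left[2] == right[0]):
        return (left[0], "/", right[2])
    return None

if __name__ == "__main__":
    NP = "NP"
    LIKES = (("S", "\\", "NP"), "/", "NP")   # (S\NP)/NP, a transitive verb
    vp = forward_application(LIKES, NP)      # likes + John  =>  S\NP
    s = backward_application(NP, vp)         # Mary + likes John  =>  S
    print(vp, s)                             # ('S', '\\', 'NP') S
```

Forward composition is one reason CCG derivations need not follow the single right-branching analysis a CFG would assign, which is what makes the formalism attractive for modeling incremental, word-by-word structure building.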

Cited by 12 publications (10 citation statements). References 101 publications (229 reference statements).
“…This has to do with the fact that we computed node count based on derived tree structures rather than on the actual derivation trees. In order to build neuro-computational language models that are closer to cognitive and neurobiological processing, it is important that future work assumes a more transparent relation between grammar and parser, for instance by using the derivation steps directly implemented by the grammar (Brennan et al., 2020; Chesi, 2015; Hale et al., 2018; Stanojević et al., 2023). Knowing when the brain projects this linguistic knowledge onto its perceptual analysis of speech is essential in order to understand how the brain transforms continuous sensory stimulation into cognitive representations, still a major question in human biology.…”
Section: Discussion (mentioning)
confidence: 99%
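
As a toy illustration of the node-count predictor discussed in the statement above (not code from the cited study), the sketch below credits each nonterminal of a derived phrase-structure tree to the word at which it is completed, a simple bottom-up per-word node count. The tree encoding and the example sentence are assumptions made for the example.

```python
from collections import Counter

def bottom_up_node_counts(tree) -> Counter:
    """Per-word node counts over a derived tree.

    `tree` is (label, child, ...) for nonterminals and a plain string for a
    word.  Each nonterminal is credited to the index of the last word it
    dominates, i.e. a bottom-up counting scheme.
    """
    counts: Counter = Counter()
    idx = -1  # index of the most recently visited word

    def walk(node):
        nonlocal idx
        if isinstance(node, str):      # terminal: advance the word index
            idx += 1
            return idx
        last = None
        for child in node[1:]:         # node[0] is the category label
            last = walk(child)
        counts[last] += 1              # nonterminal completes at its last word
        return last

    walk(tree)
    return counts

if __name__ == "__main__":
    # Derived tree for "Mary likes John"
    tree = ("S", ("NP", "Mary"), ("VP", ("V", "likes"), ("NP", "John")))
    print(bottom_up_node_counts(tree))   # Counter({2: 3, 0: 1, 1: 1})
```

A derivation tree produced by the grammar itself would generally distribute its steps differently across the words, which is the point of the recommendation above to derive predictors from the grammar's own derivation steps rather than from the derived tree.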
“…It is promising that the results of recent neuroimaging studies point in this direction. For instance, grammars that compute hierarchical structure account for variance in neural activity above and beyond the variance accounted for by sequence-based models (Brennan et al., 2012, 2020; Brennan & Hale, 2019; Lopopolo et al., 2021; Martin & Doumas, 2017; Shain et al., 2020), and hierarchical grammars that naturally represent long-distance dependencies, which are ubiquitous in natural languages, uniquely predict activity in brain areas commonly linked to syntactic structure building (Brennan et al., 2016; Li & Hale, 2019; Nelson et al., 2017; Stanojević et al., 2023). These findings reinforce the view that grammars that are well-equipped to account for natural language structures (competence) are also required to adequately model the activity of the brain when it incrementally computes these structures (performance).…”
Section: Introduction (mentioning)
confidence: 99%
“…In the end, we arrive at a potential account in which the brain tracks regularities stemming from syntactic structure and from sequential lexical information. Given that language processing is situated in a biological organ sensitive to multiple levels of information [11,15,63], neither structural nor sequential regularities are unique to the neural tracking of language. Instead, we suggest rigorous dissection [9] of, for example, auditory [14], lexical [13], and task-modulated aspects [62] of this frequency-tagged readout.…”
Section: Discussion (mentioning)
confidence: 99%
“…For example, Frank et al. (2015) demonstrated that sequential models like recurrent neural networks (RNNs) successfully predict human electroencephalography (EEG) relative to context-free grammars (CFGs), suggesting that human language processing is insensitive to hierarchical syntactic structures. In contrast, the positive results of hierarchical models like CFGs and more expressive grammar formalisms like minimalist grammars and combinatory categorial grammars have also been confirmed against human EEG (Brennan & Hale, 2019) as well as functional magnetic resonance imaging (fMRI) (Brennan et al., 2016; Stanojević et al., 2023).…”
Section: Introduction (mentioning)
confidence: 99%