2024
DOI: 10.1162/nol_a_00118

Localizing Syntactic Composition with Left-Corner Recurrent Neural Network Grammars

Abstract: In computational neurolinguistics, it has been demonstrated that hierarchical models such as Recurrent Neural Network Grammars (RNNGs), which jointly generate word sequences and their syntactic structures via syntactic composition, explain human brain activity better than sequential models such as Long Short-Term Memory networks (LSTMs). However, the vanilla RNNG employs a top-down parsing strategy, which the psycholinguistics literature has identified as suboptimal, especially for head-final…
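The abstract's contrast between top-down and left-corner parsing comes down to when a parent node is recognized relative to the words it dominates. As a minimal sketch, and not the paper's RNNG implementation, the Python below only enumerates the order in which the nodes of an assumed toy tree (nested (label, children...) tuples) would be recognized under each strategy: top-down recognizes a parent before any of its children, left-corner right after its first child.

# Minimal sketch, not the paper's RNNG: order in which tree nodes are
# recognized under top-down vs. left-corner traversal. The tree format
# (nested tuples of (label, child, ...) with words as plain strings) is
# an assumption made purely for this illustration.

def top_down(node, out):
    if isinstance(node, str):           # a word
        out.append(node)
    else:
        label, *children = node
        out.append(label)               # parent recognized before any child
        for child in children:
            top_down(child, out)
    return out

def left_corner(node, out):
    if isinstance(node, str):
        out.append(node)
    else:
        label, *children = node
        left_corner(children[0], out)   # the left corner is processed first
        out.append(label)               # parent recognized after its left corner
        for child in children[1:]:
            left_corner(child, out)
    return out

if __name__ == "__main__":
    tree = ("S", ("NP", ("DT", "the"), ("NN", "dog")),
                 ("VP", ("VBD", "barked")))
    print("top-down   :", top_down(tree, []))
    print("left-corner:", left_corner(tree, []))

On this toy example, the top-down order postulates S and NP before any word has been seen, whereas the left-corner order recognizes NP only after its first word and S only after the subject NP is complete, which is why left-corner strategies are often argued to be better suited to head-final structures.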

Cited by 2 publications (4 citation statements) · References 65 publications

“…We suggested in the introduction that these effects might be weaker in languages with head-final constructions, in particular if speakers of these languages adopt predictive parsing strategies. However, in seeming conflict with this possibility, a recent study in Japanese, a strictly head-final language, showed that a left-corner parsing model outperformed a top-down parsing model in left inferior frontal and temporal-parietal regions (Sugimoto et al., 2023). One relevant difference with our study is that they used a complexity metric that considers the number of possible syntactic analyses at each word (i.e., modeling ambiguity resolution), rather than directly quantifying the number of operations that are required to build the correct structure (i.e., node count for a one-path syntactic parse tree).…”
Section: Discussion (citation type: mentioning; confidence: 99%)
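To make the contrast drawn in this excerpt concrete: the second kind of metric can be read directly off a single bracketed parse, whereas the first would require the full set of analyses a parser entertains at each word. The sketch below is from neither study; it simply computes per-word node counts for one assumed Penn-Treebank-style parse string, counting nodes opened before and closed after each word.

# Minimal sketch (not the authors' code): per-word node-count complexity
# for a single ("one-path") bracketed parse. An ambiguity-based metric
# would instead need a parser's full chart of analyses, which this
# sketch does not model.
import re

def per_word_node_counts(bracketed):
    """Return (word, opened, closed) triples for one bracketed parse.

    'opened' = nodes opened since the previous word (top-down style count);
    'closed' = nodes closed right after this word (bottom-up style count).
    The input is assumed to be Penn-Treebank-like, e.g.
    "(S (NP (DT the) (NN dog)) (VP (VBD barked)))".
    """
    tokens = re.findall(r"\(|\)|[^\s()]+", bracketed)
    counts, opened = [], 0
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == "(":
            opened += 1          # a nonterminal (or preterminal) node is opened
            i += 2               # skip its label
        elif tok == ")":
            i += 1
        else:
            word = tok
            closed = 0           # closing brackets immediately after the word
            j = i + 1
            while j < len(tokens) and tokens[j] == ")":
                closed += 1
                j += 1
            counts.append((word, opened, closed))
            opened = 0
            i += 1
    return counts

if __name__ == "__main__":
    tree = "(S (NP (DT the) (NN dog)) (VP (VBD barked)))"
    for word, opened, closed in per_word_node_counts(tree):
        print(f"{word:8s} opened={opened} closed={closed}")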
“…During language comprehension, however, syntactic processing is lexicalized (Coopmans et al, 2022; Hagoort, 2005), and lexical information guides predictive structure building (Arai & Keller, 2013; Boland & Blodgett, 2006; Schütze & Gibson, 1999). Such lexically driven structural predictions are represented to some extent in other metrics, such as surprisal values derived from probabilistic context-free grammars (Brennan & Hale, 2019; Brennan et al, 2016; Shain et al, 2020) or recurrent neural network grammars (Brennan et al, 2020; Hale et al, 2018; Sugimoto et al, 2023). Both types of grammars incrementally build hierarchical structure, which they use to conditionalize the probability of an upcoming word or an upcoming word’s part-of-speech.…”
Section: Discussion (citation type: mentioning; confidence: 99%)
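Whatever grammar supplies the probabilities, the surprisal values mentioned here are derived the same way: surprisal(w_t) = −log P(w_t | w_1…w_{t−1}), with the conditioning prefix summarized by the model's incrementally built structure. The sketch below assumes only that some model exposes a next-word distribution given a prefix; next_word_distribution is a hypothetical stand-in, not an interface of any actual RNNG or PCFG package.

# Minimal sketch: per-word surprisal in bits from any model that can
# return P(next word | prefix). `next_word_distribution` is a hypothetical
# placeholder, not a real RNNG/PCFG API.
import math
from typing import Callable, Dict, List

def surprisals(words: List[str],
               next_word_distribution: Callable[[List[str]], Dict[str, float]]
               ) -> List[float]:
    """Return -log2 P(w_t | w_1..w_{t-1}) for each word in the sentence."""
    values = []
    for t, word in enumerate(words):
        dist = next_word_distribution(words[:t])  # conditional distribution given the prefix
        p = dist.get(word, 1e-12)                 # floor avoids log(0) for out-of-vocabulary words
        values.append(-math.log2(p))
    return values

if __name__ == "__main__":
    # Toy uniform "model" over a 10-word vocabulary, purely for illustration.
    vocab = "the dog cat barked slept a an big old small".split()
    toy = lambda prefix: {w: 1.0 / len(vocab) for w in vocab}
    print(surprisals("the dog barked".split(), toy))  # about 3.32 bits per word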