2022
DOI: 10.1101/2022.10.04.509899
Preprint
Diverging Neural Dynamics for Syntactic Structure Building in Naturalistic Speaking and Listening

Abstract: The neural correlates of sentence production have been mostly studied with constraining task paradigms that introduce artificial task effects. In this study, we aimed to gain a better understanding of syntactic processing in spontaneous production vs. naturalistic comprehension. We extracted word-by-word metrics of phrase-structure building with top-down and bottom-up parsers that make different hypotheses about the timing of structure building. In comprehension, structure building proceeded in an integratory …

Cited by 5 publications (7 citation statements)
References 80 publications
“…From these parses we extracted node count estimates to function as the syntactic features in our TRF models. Node counts have been found to effectively represent syntactic complexity in the neural signal (Brennan et al., 2016; Giglio et al., 2022; Li & Hale, 2019; Nelson, El Karoui, et al., 2017). Node counts can be computed in different ways, depending on the algorithm the parser is hypothesized to use to reach the structured representation.…”
Section: Experimental Featuresmentioning
confidence: 99%
“…[52] and Giglio et al. [133]. Finally, it may be that for the method applied here, the fine-tuning of parsing strategy does not matter as much as the mere presence of brackets, indicating constituent structures, as seen in Coopmans et al.…”
Section: Methodsmentioning
confidence: 84%
“…When words or phrases need to be grouped into a larger syntactic unit, this feature is incremented. It is also referred to as the bottom-up count of syntactic structures [23, 52, 133]. This is in contrast to the top-down count, which enumerates opening nodes.…”
Section: Methodsmentioning
confidence: 99%
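The contrast between the two counting strategies can be sketched on a toy bracketed parse. This is a minimal illustration, not the parsers or feature pipeline used in the cited studies; the function name and the Penn-style bracketed input format are assumptions. The top-down count charges each word with the non-terminal nodes opened (predicted) since the previous word, while the bottom-up count charges it with the nodes that close immediately after it:

```python
import re

def node_counts(bracketed):
    """Word-by-word node counts under two parsing strategies.

    Top-down: a word is charged with the non-terminal nodes opened
    since the previous word (structure hypothesized before the word).
    Bottom-up: a word is charged with the nodes that close right
    after it (structure built once all its words have been seen).
    """
    tokens = re.findall(r"\(|\)|[^\s()]+", bracketed)
    words, top_down, bottom_up = [], [], []
    opened_since_last = 0
    for i, tok in enumerate(tokens):
        if tok == "(":
            opened_since_last += 1
        elif tok == ")":
            if bottom_up:                     # closing node counts toward the last word
                bottom_up[-1] += 1
        elif i > 0 and tokens[i - 1] == "(":
            continue                          # category label, not a word
        else:
            words.append(tok)
            top_down.append(opened_since_last)
            bottom_up.append(0)
            opened_since_last = 0
    return words, top_down, bottom_up

# For "(S (NP (D the) (N dog)) (VP (V ran)))":
# top-down gives [3, 1, 2], bottom-up gives [1, 2, 3];
# both sum to the total number of non-terminal nodes (6).
```

Either vector can then serve as a word-aligned regressor (e.g., in a TRF model), so the choice of strategy changes *when* structural work is attributed to the signal, not *how much* in total.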
“…So far, we have discussed the regions that were associated with the left-corner RNNG, but we have not discussed how surprisal or distance computed from the left-corner RNNG modulates activity in the brain. In previous studies, it has been unclear which brain region is responsible for which component of computational models, since each study has probed syntactic processing using different grammars with different complexity metrics: for example, surprisal estimated from part-of-speech tags (Lopopolo et al., 2017); surprisal computed from CFGs (Henderson et al., 2016); node count from the structures generated by CFGs (Brennan et al., 2012; Brennan & Pylkkänen, 2017; Giglio et al., 2022; Lopopolo et al., 2021); node count from the structures generated by combinatory categorial grammars (Stanojević et al., 2021, 2023); node count from the structures generated by minimalist grammars (Brennan et al., 2016; Li & Hale, 2019); surprisal and distance computed from top-down RNNGs (Brennan et al., 2020). It might be the case that surprisal and the metrics that express the process of the parsing steps (e.g., node count, distance) play roles in separate, designated regions of the brain.…”
Section: Discussionmentioning
confidence: 99%