2021
DOI: 10.1111/cogs.13020
Parsing as a Cue‐Based Retrieval Model

Abstract: This paper develops a novel psycholinguistic parser and tests it against experimental and corpus reading data. The parser builds on recent research into memory structures, which argues that memory retrieval is content-addressable and cue-based. It is shown that the theory of cue-based memory systems can be combined with transition-based parsing to produce a parser that, when combined with the cognitive architecture ACT-R, can model reading and predict online behavioral measures (reading times and regressions…
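To make the abstract's proposal concrete, here is a minimal Python sketch (not the paper's implementation) of the core mechanism: memory chunks are bundles of features, retrieval cues score every chunk in parallel, and ACT-R's standard equations turn the winning chunk's activation into a predicted retrieval latency. All parameter values and feature names below are illustrative assumptions.

```python
import math

# Minimal sketch of cue-based retrieval in the spirit of ACT-R.
# Chunks are feature bundles; retrieval scores each chunk by cue overlap,
# with a mismatch penalty for cues the chunk fails to match.

MISMATCH_PENALTY = 1.0   # assumed value, not from the paper
LATENCY_FACTOR = 0.2     # ACT-R's F parameter; value assumed here

def activation(chunk, cues, base_level=0.0):
    """Score a chunk against the retrieval cues.

    Matching cues raise activation; mismatching cues lower it, so a
    similar-but-wrong chunk can still win the retrieval race -- the
    mechanism behind similarity-based interference in reading.
    """
    score = base_level
    for feature, value in cues.items():
        score += 1.0 if chunk.get(feature) == value else -MISMATCH_PENALTY
    return score

def retrieve(memory, cues):
    """Return the most active chunk and its predicted latency F * exp(-A)."""
    best = max(memory, key=lambda chunk: activation(chunk, cues))
    latency = LATENCY_FACTOR * math.exp(-activation(best, cues))
    return best, latency

# Toy example: retrieving a subject noun phrase at the verb.
memory = [
    {"cat": "NP", "case": "nom", "word": "dog"},
    {"cat": "NP", "case": "acc", "word": "ball"},
]
chunk, latency = retrieve(memory, cues={"cat": "NP", "case": "nom"})
print(chunk["word"], round(latency, 3))  # "dog" wins; latency feeds reading-time predictions
```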

Cited by 7 publications (10 citation statements); references 112 publications (255 reference statements).
“…These exploratory results have several implications. First, they support the existence of syntactically related WM load in the language network during naturalistic sentence comprehension and indicate that the DLT captures this WM signature better than more recent algorithmic-level models of WM in language (Lewis & Vasishth, 2005; Rasmussen & Schuler, 2018; Dotlačil, 2021). Second, they present a serious challenge to the hypothesis that the MD network (the most likely domain-general WM resource) is recruited for language-related WM operations: despite casting a broad net over theoretically motivated WM measures and performing the testing in-sample, the MD network does not show systematic correlates of WM demand.…”
Section: Results (citation type: mentioning)
confidence: 69%
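For readers unfamiliar with the DLT mentioned in this excerpt, a rough sketch of its integration-cost metric (Gibson, 2000) follows, assuming the simplest counting rule: cost grows with the number of new discourse referents intervening between a dependent and its head. The example sentence, tag set, and referent definition are simplifications for illustration, not taken from the cited study.

```python
# Rough sketch of DLT-style integration cost: count intervening discourse
# referents (here simplified to nouns and verbs) between a dependent and
# the head it attaches to. Longer dependencies over more referents cost more.

DISCOURSE_REFERENTS = {"NOUN", "VERB"}  # simplification of Gibson's definition

def integration_cost(pos_tags, dep_index, head_index):
    """Count discourse referents strictly between the dependent and its head."""
    lo, hi = sorted((dep_index, head_index))
    return sum(1 for tag in pos_tags[lo + 1:hi] if tag in DISCOURSE_REFERENTS)

# "The reporter who the senator attacked admitted the error."
tags = ["DET", "NOUN", "PRON", "DET", "NOUN", "VERB", "VERB", "DET", "NOUN"]
# Integrating "admitted" (index 6) with its subject "reporter" (index 1):
print(integration_cost(tags, dep_index=1, head_index=6))  # 2 intervening referents
```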
“…At a high level, we analyze the influence of theory-driven measures of working memory load during auditory comprehension of naturalistic stories (Futrell et al., 2020) on activation levels in the language-selective (LANG) vs. domain-general multiple-demand (MD) networks identified in each participant using an independent functional localizer. To control for regional and participant-level variation in the hemodynamic response function (HRF), the HRF is estimated from data using continuous-time deconvolutional regression (Shain & Schuler, 2018, 2021), rather than assumed (cf. e.g., Bhattasali et al., 2019; Brennan et al., 2016).…”
Section: Methods (citation type: mentioning)
confidence: 99%
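The key idea behind continuous-time deconvolutional regression, as used in this excerpt, can be shown in a short sketch: the HRF is treated as a parametric kernel of continuous time-since-event, so irregularly timed word onsets and fixed-rate scans need not share a grid. The gamma kernel, parameter values, and variable names below are illustrative assumptions, not Shain & Schuler's actual model.

```python
import numpy as np
from scipy.stats import gamma

# Sketch of the continuous-time idea: the predicted signal at each scan is
# the sum of kernel responses to all past events, where the kernel is a
# continuous function of time-since-event whose parameters could be fit to data.

def hrf(t, shape=6.0, rate=1.0):
    """Gamma-shaped response kernel evaluated at continuous lags t (seconds)."""
    return gamma.pdf(t, a=shape, scale=1.0 / rate)

def predict_bold(scan_times, event_times, event_values, shape=6.0, rate=1.0):
    """Predicted signal at each scan time, given past events and their weights."""
    lags = scan_times[:, None] - event_times[None, :]       # time since each event
    kernel = np.where(lags > 0, hrf(lags, shape, rate), 0)  # causal: future events contribute 0
    return kernel @ event_values

# Word onsets (irregularly spaced, as in naturalistic listening) carrying a
# hypothetical per-word WM-load predictor; scans every 2 seconds.
events = np.array([0.3, 0.9, 1.6, 2.2, 3.1])
load = np.array([1.0, 0.0, 2.0, 1.0, 3.0])
scans = np.arange(0.0, 20.0, 2.0)
print(predict_bold(scans, events, load).round(3))
```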
“…These models focus on different aspects of sentence processing and have been evaluated against corpus data, such as the Schilling corpus (Schilling et al., 1998). Two models that investigate the interaction between eye-movement control and sentence comprehension using data from planned experiments are reported in Vasishth and Engelmann (2022) and Dotlačil (2021); both investigations use a highly simplified version of E-Z Reader, namely the Eye Movements and Movement of Attention (EMMA) model embedded within the ACT-R architecture (Salvucci, 2001). The simplified EMMA model has important limitations; for example, as discussed in Engelmann et al. (2013), the model only allows regressive eye movements to the preceding word.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
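The EMMA model discussed in this excerpt predicts how long a word takes to encode from its frequency and its distance from the current fixation; a small sketch of its encoding-time equation (Salvucci, 2001) follows. The parameter values are commonly cited defaults, assumed here rather than taken from the papers above.

```python
import math

# EMMA's encoding-time equation:
#   T_enc = K * (-log f) * exp(k * eps)
# where f is the word's normalized frequency and eps its eccentricity from
# the fixation point in degrees of visual angle. Rarer and more peripheral
# words take longer to encode.

K = 0.006  # scaling constant in seconds; commonly cited default, assumed here
k = 0.4    # eccentricity weight; commonly cited default, assumed here

def encoding_time(frequency, eccentricity_deg):
    """Expected encoding time (s) for a word at a given frequency and eccentricity."""
    return K * (-math.log(frequency)) * math.exp(k * eccentricity_deg)

# A moderately frequent word at fixation vs. a rare word two degrees away:
print(round(encoding_time(0.01, 0.0), 4))    # foveated, faster
print(round(encoding_time(0.0001, 2.0), 4))  # rarer and eccentric, slower
```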