We present a self-organizing approach to sentence processing that sheds new light on notional plurality effects in agreement attraction, using pseudopartitive subject noun phrases (e.g., a bottle of pills). We first show that notional plurality ratings (numerosity judgments for subject noun phrases) predict verb agreement choices in pseudopartitives, in line with the "Marking" component of the Marking and Morphing theory of agreement processing. However, no account to date has derived notional plurality values from independently needed principles of language processing. We argue on the basis of new experimental evidence and a dynamical systems model that the theoretical black box of notional plurality can be unpacked into objectively measurable semantic features. With these semantic features driving structure formation (and hence agreement choice), our model reproduces the human verb production patterns as a byproduct of normal processing. Finally, we discuss how the self-organizing approach might be extended to other agreement attraction phenomena.
Among theories of human language comprehension, cue-based memory retrieval has proven to be a useful framework for understanding when and how processing difficulty arises in the resolution of long-distance dependencies. Most previous work in this area has assumed that very general retrieval cues like [+subject] or [+singular] do the work of identifying (and sometimes misidentifying) a retrieval target in order to establish a dependency between words. However, recent work suggests that general, handpicked retrieval cues like these may not be enough to explain illusions of plausibility (Cunnings & Sturt, 2018), which can arise in sentences like The letter next to the porcelain plate shattered. Capturing such retrieval interference effects requires lexically specific features and retrieval cues, but handpicking the features is hard to do in a principled way and greatly increases modeler degrees of freedom. To remedy this, we use well-established word embedding methods to create distributed lexical feature representations that encode retrieval-relevant information, which is accessed via distributed retrieval cue vectors. We show that the similarity between the feature and cue vectors (a measure of plausibility) predicts total reading times in Cunnings and Sturt's eye-tracking data. The features can easily be plugged into existing parsing models (including cue-based retrieval and self-organized parsing), putting very different models on more equal footing and facilitating future quantitative comparisons.
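As an illustrative sketch (not the authors' exact pipeline), the cue–feature match can be computed as the cosine similarity between a distributed retrieval-cue vector and each candidate noun's feature vector. The 4-dimensional vectors below are hypothetical stand-ins for trained embeddings, chosen only to show the intended asymmetry for the shattered example:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between a retrieval-cue vector and a feature vector."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-d "embeddings" (hypothetical values, stand-ins for trained vectors).
cue_shatter = np.array([0.9, 0.1, 0.0, 0.2])  # what the verb's cue encodes
plate       = np.array([0.8, 0.2, 0.1, 0.1])  # shatterable distractor
letter      = np.array([0.1, 0.9, 0.3, 0.0])  # non-shatterable subject

match_plate  = cosine(cue_shatter, plate)
match_letter = cosine(cue_shatter, letter)
# The distractor "plate" matches the cue better than the true subject
# "letter", which is the configuration behind the illusion of plausibility.
```

On this view, the graded similarity score replaces a hand-picked binary feature match, which is what allows the measure to be regressed against reading times.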
Cue-based retrieval theories of sentence processing assume that syntactic dependencies are resolved through a content-addressable search process. An important recent claim is that in certain dependency types, the retrieval cues are weighted such that one cue dominates. This cue-weighting proposal aims to explain the observed average behavior, but here we show that there is systematic individual-level variation in cue weighting. Using the Lewis and Vasishth cue-based retrieval model and 13 published datasets, we estimated individual-level parameters for processing speed and cue weighting via hierarchical Approximate Bayesian Computation (ABC). The modeling reveals a nuanced picture of cue weighting: we find support for the idea that some participants weight cues differentially, but not all participants do. Moreover, only fast readers tend to weight structural cues more highly, suggesting that reading proficiency may be associated with cue weighting. A broader achievement of this work is to demonstrate how individual differences can be investigated in computational models of sentence processing without compromising the complexity of the model.
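The estimation idea can be sketched in miniature with rejection ABC: draw parameter values from a prior, simulate data, and keep draws whose simulated summary statistic lands near the observed one. In the sketch below a toy lognormal reading-time simulator stands in for the full Lewis and Vasishth model, and all numbers (prior range, tolerance, latency factor) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rt(latency_factor, n=200):
    """Toy stand-in for the retrieval model: reading times whose mean
    scales with a single latency parameter."""
    return rng.lognormal(mean=np.log(300 * latency_factor), sigma=0.3, size=n)

observed = simulate_rt(latency_factor=1.5)  # pretend these are one reader's data
obs_stat = observed.mean()

# ABC rejection step: sample from the prior, accept draws whose simulated
# summary statistic falls within a tolerance (epsilon) of the observed one.
accepted = []
for _ in range(5000):
    theta = rng.uniform(0.5, 3.0)        # prior over the latency factor
    if abs(simulate_rt(theta).mean() - obs_stat) < 20:  # epsilon = 20 ms
        accepted.append(theta)

posterior_mean = np.mean(accepted)       # recovers a value near 1.5
```

A hierarchical version, as used in the paper, additionally links each reader's parameters through group-level distributions rather than estimating every reader independently.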
Studies on similarity-based interference in subject-verb agreement dependencies have found a consistent facilitatory effect in ungrammatical sentences but no conclusive effect in grammatical sentences. Existing models propose that interference is caused either by a faulty representation of the input (encoding-based models) or by difficulty in retrieving the subject based on cues at the verb (retrieval-based models). Neither class of model captures the observed patterns in human reading time data. We propose a new model that integrates a feature encoding mechanism into an existing cue-based retrieval model. Our model outperforms the cue-based retrieval model in explaining interference effect data from both grammatical and ungrammatical sentences. We argue that our integrated encoding and retrieval model can provide a basis for experimental and modeling work on understanding interference effects in sentence comprehension.
Two-sided group digraphs and graphs, introduced by Iradmusa and Praeger, provide a generalization of Cayley digraphs and graphs in which arcs are determined by left and right multiplying by elements of two subsets of the group. We characterize when two-sided group digraphs and graphs are weakly and strongly connected and count connected components, using both an explicit elementary perspective and group actions. Our results and examples address four open problems posed by Iradmusa and Praeger that concern connectedness and valency. We pose five new open problems.
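For intuition, the construction can be sketched computationally. The code below is my own illustration on the cyclic group Z_n in additive notation, where the two-sided arc g → l·g·r becomes g → l + g + r; in a nonabelian group the left and right factors would not collapse like this, which is what makes the two-sided construction strictly more general than a Cayley digraph:

```python
from itertools import product

def two_sided_arcs(n, L, R):
    """Arcs of the two-sided group digraph on Z_n: an arc g -> (l + g + r) mod n
    for every l in L and r in R (additive notation for the cyclic group)."""
    return {(g, (l + g + r) % n) for g in range(n) for l, r in product(L, R)}

def weak_components(n, arcs):
    """Count weakly connected components with a simple union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in arcs:
        parent[find(u)] = find(v)
    return len({find(x) for x in range(n)})

# In Z_6 with L = {2}, R = {0}, every arc advances by 2, so the digraph
# splits into the even and odd residues: two weak components.
n_components = weak_components(6, two_sided_arcs(6, {2}, {0}))
```

Note that for an abelian group this reduces to a Cayley digraph on the sumset L + R, so small nonabelian groups are needed to see genuinely two-sided behavior.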
Studies of the speed-accuracy trade-off (SAT) have been influential in arguing for the direct-access model of retrieval in sentence processing. The direct-access model assumes that long-distance dependencies (like subject-verb agreement) rely on a content-addressable search for the correct representation in memory. In this paper, we address two important weaknesses in the statistical methods standardly used for analysing SAT data. First, these methods are based on non-hierarchical modelling. We show how a hierarchical model can be fit to SAT data, and we test parameter recovery in this more conservative model. The parameters most relevant to the direct-access account cannot be accurately estimated. This may be due to the nature of SAT data or the standard SAT model. Second, the power properties of SAT studies are unknown. We conduct a power analysis and show that inferences from null results to the null hypothesis, though commonplace in the SAT literature, are likely unwarranted.
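A power analysis of the kind described can be run by simulation: generate many synthetic experiments under an assumed true effect, apply the analysis, and record how often the effect is detected. The sketch below uses hypothetical effect sizes and noise levels and a simple one-sample t-style test in place of the full SAT model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_power(effect_ms, sd_ms, n, n_sims=2000):
    """Estimate power: the proportion of simulated experiments in which a
    mean paired difference is detected at roughly the 5% level (|t| > 1.96)."""
    hits = 0
    for _ in range(n_sims):
        diffs = rng.normal(effect_ms, sd_ms, size=n)  # per-subject effect
        t = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(n))
        hits += abs(t) > 1.96
    return hits / n_sims

# Hypothetical scenarios: a small effect with few subjects vs. a larger
# effect with more subjects.
low_power  = simulated_power(effect_ms=10, sd_ms=100, n=20)
high_power = simulated_power(effect_ms=50, sd_ms=100, n=40)
```

When power is low, as in the first scenario, a null result is uninformative, which is the basis of the argument that inferences from null results to the null hypothesis are unwarranted in underpowered SAT studies.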