Ambiguity resolution is a central problem in language comprehension. Lexical and syntactic ambiguities are standardly assumed to involve different types of knowledge representations and be resolved by different mechanisms. An alternative account is provided in which both types of ambiguity derive from aspects of lexical representation and are resolved by the same processing mechanisms. Reinterpreting syntactic ambiguity resolution as a form of lexical ambiguity resolution obviates the need for special parsing principles to account for syntactic interpretation preferences, reconciles a number of apparently conflicting results concerning the roles of lexical and contextual information in sentence processing, explains differences among ambiguities in terms of ease of resolution, and provides a more unified account of language comprehension than was previously available.
M. A. Just and P. A. Carpenter's (1992) capacity theory of comprehension posits a linguistic working memory functionally separated from the representation of linguistic knowledge. G. S. Waters and D. Caplan's (1996) critique of this approach retained the notion of a separate working memory. In this article, the authors present an alternative account motivated by a connectionist approach to language comprehension. In their view, processing capacity emerges from network architecture and experience and is not a primitive that can vary independently. Individual differences in comprehension do not stem from variations in a separate working memory capacity; instead they emerge from an interaction of biological factors and language experience. This alternative is argued to provide a superior account of comprehension results previously attributed to a separate working memory capacity.
Language production processes can provide insight into how language comprehension works and language typology—why languages tend to have certain characteristics more often than others. Drawing on work in memory retrieval, motor planning, and serial order in action planning, the Production-Distribution-Comprehension (PDC) account links work in the fields of language production, typology, and comprehension: (1) faced with substantial computational burdens of planning and producing utterances, language producers implicitly follow three biases in utterance planning that promote word order choices that reduce these burdens, thereby improving production fluency. (2) These choices, repeated over many utterances and individuals, shape the distributions of utterance forms in language. The claim that language form stems in large degree from producers' attempts to mitigate utterance planning difficulty is contrasted with alternative accounts in which form is driven by language use more broadly, language acquisition processes, or producers' attempts to create language forms that are easily understood by comprehenders. (3) Language perceivers implicitly learn the statistical regularities in their linguistic input, and they use this prior experience to guide comprehension of subsequent language. In particular, they learn to predict the sequential structure of linguistic signals, based on the statistics of previously encountered input. Thus, key aspects of comprehension behavior are tied to lexico-syntactic statistics in the language, which in turn derive from utterance planning biases promoting production of comparatively easy utterance forms over more difficult ones. This approach contrasts with classic theories in which comprehension behaviors are attributed to innate design features of the language comprehension system and associated working memory. The PDC instead links basic features of comprehension to a different source: production processes that shape language form.
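The prediction claim in point (3), that comprehenders learn sequential statistics from prior input and use them to anticipate upcoming material, can be sketched with a minimal bigram model. This is an illustration only, not the PDC implementation: the toy corpus, the words in it, and the use of surprisal as a proxy for comprehension difficulty are all assumptions made for the example.

```python
# Minimal sketch (not the PDC model itself): a "comprehender" that
# learns bigram statistics from experience and uses them to predict
# the next word. Surprisal (-log2 probability) stands in for
# comprehension difficulty. The corpus is invented.
from collections import Counter, defaultdict
import math

def train_bigrams(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def surprisal(counts, prev, word):
    total = sum(counts[prev].values())
    if total == 0 or counts[prev][word] == 0:
        return float("inf")  # unseen transition: maximally difficult
    return -math.log2(counts[prev][word] / total)

corpus = [
    "the dog chased the cat",
    "the dog chased the squirrel",
    "the cat chased the squirrel",
]
model = train_bigrams(corpus)
# A fully predictable transition yields zero surprisal (easy) ...
easy = surprisal(model, "dog", "chased")
# ... while a less predictable one yields higher surprisal (harder).
hard = surprisal(model, "the", "squirrel")
assert easy < hard
```

On this view, a structure's processing cost is tied to its statistics in prior input rather than to fixed architectural limits, which is the contrast the abstract draws with classic capacity-based theories.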
Many explanations of the difficulties associated with interpreting object relative clauses appeal to the demands that object relatives make on working memory. MacDonald and Christiansen (2002) pointed to variations in reading experience as a source of differences, arguing that the unique word order of object relatives makes their processing more difficult and more sensitive to the effects of previous experience than the processing of subject relatives. This hypothesis was tested in a large-scale study manipulating the reading experiences of adults over several weeks. The group receiving relative clause experience increased reading speeds for object relatives more than for subject relatives, whereas a control experience group did not. The reading time data were compared to performance of a computational model given different amounts of experience. The results support claims for experience-based individual differences and an important role for statistical learning in sentence comprehension processes.

George Miller's (1956) landmark description of the nature of short term memory was a characterization of both its limits (7 ± 2 units) and the modulation of these limits through learning, in that the units were chunks, the size of which could change through a person's experience with the material being processed. In discussions of computational capacity since that time, different research paradigms have tended to vary in their attention to the claim of capacity limits vs. the claim that capacity changes through learning. For example, within adult sentence comprehension, many accounts have invoked capacity limits to explain people's difficulties in relative clause comprehension (e.g., Gibson, 1998; Just & Carpenter, 1992; Lewis, Vasishth, & Van Dyke, 2006). All of these accounts have noted that experience could affect processing abilities, but the focus in these accounts has been on showing how a characterization of capacity limits explains certain aspects of sentence comprehension.
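The experience-based prediction tested above, that the same added exposure speeds rare structures more than frequent ones, can be illustrated with a toy cost function. A minimal sketch under one assumption: processing cost shrinks with the logarithm of cumulative experience. The exposure counts and base constant are invented, and this is not the computational model reported in the study.

```python
# Toy illustration (not the reported model): if processing cost falls
# logarithmically with cumulative experience, then the same amount of
# added experience speeds up a rare structure (object relatives) more
# than a frequent one (subject relatives). All numbers are invented.
import math

def reading_time(experience_count, base=1000.0):
    # Diminishing returns: cost shrinks with the log of experience.
    return base / math.log(experience_count + math.e)

sr_before = reading_time(1000)   # subject relatives: frequent structure
or_before = reading_time(50)     # object relatives: rare structure
extra = 50                       # identical added training for both
sr_gain = sr_before - reading_time(1000 + extra)
or_gain = or_before - reading_time(50 + extra)
assert or_gain > sr_gain  # rare structures benefit more from practice
```

The diminishing-returns shape is the key design choice: it makes frequent structures near ceiling, so targeted experience chiefly benefits the rare word order, matching the pattern the study reports.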
The relationship between print exposure and measures of reading skill was examined in college students (N=99, 58 female; mean age=20.3 years). Print exposure was measured with several new self-reports of reading and writing habits, as well as updated versions of the Author Recognition Test and the Magazine Recognition Test (Stanovich & West, 1989). Participants completed a sentence comprehension task with syntactically complex sentences, and reading times and comprehension accuracy were measured. An additional measure of reading skill was provided by participants' scores on the verbal portions of the ACT, a standardized achievement test. Higher levels of print exposure were associated with higher sentence processing abilities and superior verbal ACT performance. The relative merits of different print exposure assessments are discussed.
Verbal working memory (WM) tasks typically involve the language production architecture for recall; however, language production processes have had a minimal role in theorizing about WM. A framework for understanding verbal WM results is presented here. In this framework, domain-specific mechanisms for serial ordering in verbal WM are provided by the language production architecture, in which positional, lexical, and phonological similarity constraints are highly similar to those identified in the WM literature. These behavioral similarities are paralleled in computational modeling of serial ordering in both fields. The role of long-term learning in serial ordering performance is emphasized, in contrast to some models of verbal WM. Classic WM findings are discussed in terms of the language production architecture. The integration of principles from both fields illuminates the maintenance and ordering mechanisms for verbal information.

Keywords: working memory; language production; phonological encoding; serial ordering; speech errors

Nearly 30 years ago, Albert Ellis (1980) observed that errors on tests of verbal working memory (WM) paralleled those that occur naturally in speech production. Despite this significant observation, the majority of memory and language research since that time has focused on relations between verbal WM and language comprehension and acquisition rather than on the relation between verbal WM and language production (Baddeley, Eldridge, & Lewis, 1981; Caplan & Waters, 1999; Daneman & Carpenter, 1980, 1983; Engle, Cantor, & Carullo, 1992; Gathercole & Baddeley, 1990; Just & Carpenter, 1992). The relative inattention to language production is striking given the production demands of typical verbal WM tasks: the maintenance and sequential output of verbal information.
In this review, we examine the relation between verbal WM and language production processes in light of new behavioral and theoretical advances since Ellis's initial observations. Verbal WM refers to the temporary maintenance and manipulation of verbal information (Baddeley, 1986). In exploring the production-WM relation, our review emphasizes domain-specific (i.e., verbal) maintenance processes in WM rather than the domain-general, attentional processes that are hypothesized to oversee processing across different domains (e.g., verbal, visual; Baddeley, 1986; Cowan, 1995). Obviously, language production processes must be involved during output in spoken recall, but the hypothesis that we explore here is that the production system is crucial to maintenance as well. We suggest that the domain-specific mechanism underlying the maintenance of serial order in verbal WM is achieved by the language production architecture rather than by a system specifically dedicated to short-term maintenance. Language production planning naturally involves the maintenance and ordering of linguistic information. This information ranges over multiple levels, including messages (several different points that the speaker plans to make), words within phras...
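The shared serial-ordering constraints described above can be caricatured with a toy positional-coding scheme: items are bound to overlapping, noisy position codes, so retrieval errors are mostly swaps of neighboring items, the transposition pattern seen in both verbal WM recall and speech errors. This is an illustrative sketch, not a specific published model; the scalar position codes, noise level, and seed are assumptions.

```python
# Toy positional-coding sketch (illustrative only): items are bound to
# noisy scalar position codes, and recall sorts by those codes. When
# neighboring codes cross, adjacent items swap, producing the
# transposition errors common to verbal WM tasks and speech production.
import random

def positional_recall(items, noise=0.4, seed=0):
    rng = random.Random(seed)
    # Bind each item to its position plus Gaussian noise; retrieval
    # orders items by the noisy codes.
    noisy = [(pos + rng.gauss(0, noise), item)
             for pos, item in enumerate(items)]
    return [item for _, item in sorted(noisy)]

sequence = ["B", "K", "T", "P", "G"]
recalled = positional_recall(sequence)
# All items are recalled; any order errors tend to be neighbor swaps,
# because distant position codes rarely cross under small noise.
```

Because the noise is small relative to the one-unit spacing between position codes, distant items almost never exchange places, which is why such models produce the locality of transposition errors noted in both literatures.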