Parallel distributed processing (PDP) models represent a new and exciting approach to the study of visual word recognition in reading. Seidenberg and McClelland's (1989) model is examined because the strongest and widest claims for the viability of a connectionist account of visual word recognition have been made on the basis of their model. The current implemented version of their model fails to account for important facts about how human subjects read aloud and make lexical decisions, despite the fact that these tasks are central to the performance domain the model purports to explain. Incorporating multiple routines and an explicit lexical level of representation into the model may help resolve some of these difficulties.

Overview

Every field has its sacred cows, and visual word recognition is no exception. Two such sacred cows are the assumptions that (a) the mind contains several lexica of word forms and (b) there are a number of routines that make use of these word forms in various ways to read aloud, make lexical decisions, and access meaning. These assumptions are common to many otherwise quite different word-recognition models (e.g., see Besner & McCann, 1987; Besner & Johnston, 1989; Carr & Pollatsek, 1985; Norris, 1986). Moreover, these basic assumptions had gone unchallenged until recently. The parallel distributed processing (PDP) model developed by McClelland and his colleagues (e.g.,
For many models of lexical ambiguity resolution, the relative frequency of the different meanings of homographs (words with more than one meaning) is crucial. Although several homograph association norms have been published in the past, none has involved a large number of subjects responding to a large number of homographs, and most homograph norming studies are now at least a decade old. In Experiment 1, associations to 566 homographs were collected from an average of 192 subjects per homograph. Frequency of occurrence for the three most common meanings is reported, along with the corresponding associates and a measure of the overall ambiguity of each homograph. Homographs whose meanings differed in part of speech were more ambiguous overall than homographs whose different meanings belonged to a single grammatical class. Homographs whose pronunciation depended on meaning (heterophones) were no more ambiguous than nonheterophones, and word frequency was unrelated to overall ambiguity. Estimates of homograph balance across different norming studies were compared, and homographs with two meanings of approximately equal relative meaning frequency (balanced homographs) and homographs with one clearly dominant meaning (polarized homographs) were identified. In Experiment 2, reliability of meaning categorizations was measured for a subset of the homographs in the first experiment. Meaning categorizations were shown to be highly reliable across raters.

Homographs are words that have more than one meaning but share the same orthography. They most often also share phonology (e.g., a dog's bark vs. a tree's bark; a fireplace poker vs. a poker game), but a few English homographs have distinct phonologies for their different meanings. For these heterophonic homographs, pronunciation depends on meaning; examples are "bass" (fish vs. guitar) and "wind" (gale vs. to coil). Contrary to intuition, homographs are not an obscure class of linguistic items.
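The abstract does not specify how overall ambiguity and balance were computed, but both can be derived from per-meaning response proportions in association norms. The sketch below shows one common illustrative choice: Shannon entropy over meaning frequencies as an ambiguity score, and the ratio of the second- to first-most-frequent meaning as a balance measure. Both formulas are assumptions for illustration, not the norms' actual definitions.

```python
import math

def meaning_stats(counts):
    """Summarize association counts for a homograph's meanings.

    counts: list of response counts, one per meaning (e.g., number of
    subjects whose associate reflected that meaning). Returns the
    meaning proportions, an entropy-based ambiguity score, and a
    balance measure for the two most frequent meanings.
    """
    total = sum(counts)
    props = [c / total for c in counts]
    # Shannon entropy in bits: 0 for a fully polarized homograph,
    # maximal when all meanings are equally frequent.
    ambiguity = -sum(p * math.log2(p) for p in props if p > 0)
    top = sorted(props, reverse=True)
    # Balance: near 1.0 = balanced homograph, near 0.0 = polarized.
    balance = top[1] / top[0] if len(top) > 1 else 0.0
    return props, ambiguity, balance

# Hypothetical counts for "bark": 100 dog-sound vs. 92 tree-covering responses
props, amb, bal = meaning_stats([100, 92])
```

On this scheme a balanced two-meaning homograph approaches an ambiguity of 1 bit and a balance of 1.0, while a strongly polarized one scores near 0 on both.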
Rather, homographs could be considered important topics of study solely because of their abundance in English. Britton (1978) found that 44% of a random sample of English words had more than one meaning, and that 85% of a sample of high-frequency English words had more than one meaning. Several authors have argued that meaning indeterminacy in language and the environment in general is widespread and is one of the pervasive problems of human information processing (e.g
Lexical ambiguity research over the last two decades is reviewed, with a focus on how that literature applies to understanding the resolution of meaning for words. Early models of ambiguity processing dealt almost exclusively with the time course of the effects of context on lexical access, in order to address the issue of modularity of lexical access. Newer models of ambiguity processing accommodate recent findings of early context effects that are contingent on both strength of context and meaning frequency. The most important contribution of these newer models of ambiguity processing is not to the modularity debate, but to investigation of the range of parameters affecting the entire meaning resolution process, including meaning access as well as the integration of meanings into context. As an example of this approach, we describe a simple quantitative model of meaning resolution that subsumes many other models as parametric variations.
The authors explored the role of phonological representations in the integration of lexical information across saccadic eye movements. Study participants executed a saccade to a preview letter string that was presented extrafoveally. In Experiment 1, the preview string was replaced by a target string during the saccade, and the participants performed a lexical decision. Targets with phonologically regular initial trigrams benefited more from a preview than did targets with irregular initial trigrams. In Experiment 2, words with regularly pronounced initial trigrams were more likely to be correctly identified from the preview alone. In Experiment 3, participants were more likely to detect a change across a saccade from regular to irregular initial trigrams than from irregular to regular trigrams. The results suggest that phonological representations are activated from an extrafoveal preview and that this phonological information can be integrated with foveal information following a saccade.

Models of visual word recognition traditionally have been concerned with the nature of the representations that mediate between perceptual information and lexical knowledge. For example, according to one type of model, encoding of the graphemic information present in the visual stimulus directly activates lexical representations without the need for phonological encoding (e.g., Seidenberg & McClelland, 1989). In other models, visual information necessarily activates a phonological representation prior to activating semantic representations (e.g.
We describe an activation-based model of word recognition and apply it to the process of resolving the meaning of homographs presented in context. The interpretation of homographs was assessed by asking participants to decide whether a target word was related to the meaning of a sentence containing a homograph. These relatedness decisions varied systematically with the relative frequency of the homograph meanings, delay, and the nature of the sentence context. In the model, it was assumed that orthographic and contextual information combine additively to determine the activation of word meanings, and that the probability of a "related" response is determined by the activation level of the related meaning. The model accurately accounts for all observed effects, as well as their interaction. We conclude that the core process of lexical ambiguity resolution may be quite simple.
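The abstract specifies that orthographic and contextual information combine additively to activate word meanings, with the activation level of the related meaning determining the probability of a "related" response. A minimal sketch of that additive scheme, assuming a logistic link from activation to response probability (the link function, parameter names, and example values are all illustrative, not taken from the paper):

```python
import math

def related_probability(orth_evidence, context_evidence, bias=0.0):
    """Additive activation model of homograph meaning resolution (sketch).

    Orthographic evidence (driven by relative meaning frequency) and
    contextual evidence combine additively into the activation of a
    candidate meaning; a logistic function then maps activation to the
    probability of a "related" response to a target word.
    """
    activation = orth_evidence + context_evidence
    return 1.0 / (1.0 + math.exp(-(activation + bias)))

# Dominant meaning in a supportive context: high "related" probability
p_dominant = related_probability(orth_evidence=1.5, context_evidence=1.0)
# Subordinate meaning in a neutral context: lower probability
p_subordinate = related_probability(orth_evidence=-0.5, context_evidence=0.0)
```

Because the evidence terms are simply summed, effects of meaning frequency and sentence context trade off against each other, which is one way such a model can accommodate interactions among frequency, delay, and context.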
Dixon (1986) proposed a location-confusion model to account for an interference effect that occurs when subjects decide whether a briefly presented target item appeared in a briefly presented array. In the model, it was assumed that information about the location of items decays quickly and that subjects sometimes have difficulty deciding whether a particular identity code corresponded to the target or the array. The present report describes two additional experiments using this paradigm. Experiment 1 confirmed the assumption that this interference occurs only with visual targets and not with auditory target items. Experiment 2 tested whether the interference occurs only at the level of identity codes for well-learned stimuli or whether it can also occur with arbitrary visual patterns. The stimuli in this experiment were pseudoletters that contained letter features, but had no simple, abstract identity codes. The results were consistent with the model, in that no interference effect was observed on overall accuracy; however, other aspects of the results suggested that an interference effect may have been masked by other trends in the data.