Objective-A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition.

Design-Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming.

Results-The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory.
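The neighborhood computation and frequency-weighted choice rule described above can be sketched as follows. The one-phoneme substitution/deletion/addition definition of a neighbor is a common operationalization; treating every word's intelligibility as a constant 1.0 is a simplifying assumption here (the published rule weights words by confusion-matrix-derived stimulus probabilities), so this is an illustrative sketch rather than the model's exact formulation.

```python
def one_phoneme_apart(a, b):
    """True if phoneme sequences a and b differ by exactly one
    substitution, deletion, or addition (a common operational
    definition of a similarity neighbor)."""
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) != 1:
        return False
    short, long_ = (a, b) if len(a) < len(b) else (b, a)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))


def neighborhood_probability(word, lexicon, freq, p=lambda w: 1.0):
    """Frequency-weighted neighborhood probability, after Luce's (1959)
    choice rule: support for the target divided by support for the
    target plus its neighbors. `p` stands in for stimulus/neighbor
    confusability; the constant 1.0 default is an assumption, keeping
    only the frequency weighting."""
    neighbors = [w for w in lexicon if w != word and one_phoneme_apart(word, w)]
    target = p(word) * freq[word]
    return target / (target + sum(p(n) * freq[n] for n in neighbors))
```

With words represented as tuples of phoneme symbols, a high-frequency word in a sparse, low-frequency neighborhood yields a value near 1, while a dense, high-frequency neighborhood drives the value down, mirroring the inhibitory neighborhood effects reported above.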
The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing-impaired populations of children and adults.

Since the publication of Oldfield's (1966) seminal article, "Things, Words and the Brain," a great deal of attention has been devoted to the structural organization of words in the mental lexicon. Most of this research, however, has focused on the structure of higher-level aspects of lexical representations, namely the semantic and conceptual organization of lexical items in memory (e.g., Miller & Johnson-Laird, 1976; Smith, 1978). As a consequence, little attention has been directed to the structural organization of the representations of sensory and perceptual information used to gain access to these higher-level sources of information. The goal of the present investigation was to explore in detail this structure and its implications for the perception of spoken words by normal and hearing-impaired listeners.

In the present set of studies, structure will be defined specifically in terms of similarity relations among the sound patterns of words. Similarity will serve as th...
Current theories of spoken-word recognition posit two levels of representation and process: lexical and sublexical. By manipulating probabilistic phonotactics and similarity-neighborhood density, we attempted to determine if these two levels of representation have dissociable effects on processing. Whereas probabilistic phonotactics have been associated with facilitatory effects on recognition, increases in similarity-neighborhood density typically result in inhibitory effects on recognition arising from lexical competition. Our results demonstrated that when the lexical level is invoked using real words, competitive effects of neighborhood density are observed. However, when strong lexical effects are removed by the use of nonsense word stimuli, facilitatory effects of phonotactics emerge. These results are consistent with a two-level framework of process and representation embodied in certain current models of spoken-word recognition.
Phonotactic probability refers to the frequency with which phonological segments and sequences of segments occur in words in a given language. We describe one method of estimating phonotactic probabilities based on words in American English. These estimates of phonotactic probability have been used in a number of previous studies and are now being made available to other researchers via a Web-based interface. Instructions for using the interface, as well as details regarding how the measures were derived, are provided in the present article. The Phonotactic Probability Calculator can be accessed at http://www.people.ku.edu/~mvitevit/PhonoProbHome.html.

Crystal (1992, p. 301) defined phonotactics as "The sequential arrangements of phonological units that are possible in a language. In English, for example, initial /spr-/ is a possible phonotactic sequence, whereas /spm-/ is not." Although phonotactics has traditionally been thought of in dichotomous terms (legal vs. illegal), the sounds in the legal category of a language do not all occur with equal probability. For example, the segments /s/ and /j/ are both legal as word-initial consonants in English, but /s/ occurs word-initially more often than /j/. Similarly, the word-initial sequence of segments /sʌ/ is more common in English than the word-initial sequence /ji/. The term phonotactic probability has been used to refer to the frequency with which legal phonological segments and sequences of segments occur in a given language (Jusczyk, Luce, & Charles-Luce, 1994).

A comprehensive review of the studies that have demonstrated influences of phonotactic probability on the processing of spoken words is beyond the scope of this article, but it is worth noting a few examples to illustrate the breadth of processes that rely on this probabilistic information. For example, Jusczyk, Friederici, Wessels, Svenkerud, and Jusczyk (1993) found that sensitivity to phonotactic information occurs very early in life.
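The estimation method described above, position-specific segment and biphone probabilities computed over a frequency-weighted word list, can be sketched as follows. The log-frequency weighting reflects the general approach of frequency-weighted counts; the add-one inside the logarithm and the exact normalization are assumptions for illustration and may differ from the published calculator.

```python
import math
from collections import defaultdict


def phonotactic_tables(lexicon_freqs):
    """Build position-specific segment and biphone probability tables
    from a {phoneme-tuple: frequency} dictionary. Each word contributes
    a log10(frequency + 1) weight (an assumed weighting scheme); each
    table entry is that position's weighted count divided by the total
    weight observed at that position."""
    seg_num = defaultdict(float)   # (position, segment) -> weighted count
    seg_den = defaultdict(float)   # position -> total weight at position
    bi_num = defaultdict(float)    # (position, seg_i, seg_i+1) -> weighted count
    bi_den = defaultdict(float)    # position -> total weight for biphones
    for word, freq in lexicon_freqs.items():
        w = math.log10(freq + 1)
        for i, seg in enumerate(word):
            seg_num[(i, seg)] += w
            seg_den[i] += w
        for i in range(len(word) - 1):
            bi_num[(i, word[i], word[i + 1])] += w
            bi_den[i] += w
    seg_prob = {k: v / seg_den[k[0]] for k, v in seg_num.items()}
    bi_prob = {k: v / bi_den[k[0]] for k, v in bi_num.items()}
    return seg_prob, bi_prob
```

Summing (or averaging) the table entries for a target word's segments and biphones then yields the kind of per-word phonotactic probability score used in the studies discussed above.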
They found that by 9 months of age, infants were able to discriminate between sounds that were and were not part of their native language. Jusczyk et al. (1994) further demonstrated that infants of the same age could discriminate between nonwords containing sounds that were more common or less common in their native language. Adults are also sensitive to the probability with which sounds occur in their native language. Ratings of the word-likeness of specially constructed nonwords by adults were influenced by phonotactic probability, such that nonwords composed of high-probability segments and sequences of segments were rated as more word-like in English than nonwords composed of low-probability segments and sequences of segments (Vitevitch, Luce, Charles-Luce, & Kemmerer, 1997; Vitevitch, Pisoni, Kirk, Hay-McCutcheon, & Yount, 2002; see also Eukel, 1980; Messer, 1967; Pertz & Bever, 1975).

Phonotactic probability also appears to influence several on-line language processes. For example, phonotactic probability is one of several cues tha...
Two experiments employing an auditory priming paradigm were conducted to test predictions of the Neighborhood Activation Model of spoken word recognition (Luce & Pisoni, 1989; manuscript under review). Acoustic-phonetic similarity, neighborhood densities, and frequencies of prime and target words were manipulated. In Experiment 1, priming with low-frequency, phonetically related spoken words inhibited target recognition, as predicted by the Neighborhood Activation Model. In Experiment 2, the same prime-target pairs were presented with a longer interstimulus interval, and the effects of priming were eliminated. In both experiments, predictions derived from the Neighborhood Activation Model regarding the effects of neighborhood density and word frequency were supported. The results are discussed in terms of competing activation of lexical neighbors and the dissociation of activation and frequency in spoken word recognition.