Phonological priming effects were examined in an auditory single-word shadowing task. In 6 experiments, target items were preceded by auditorily or visually presented, phonologically similar, word or nonword primes. Results revealed facilitation in response time when a target was preceded by a word or nonword prime having the same initial phoneme when the prime was auditorily presented but not when it was visually presented. Second, modality-independent interference was observed when the phonological overlap between the prime and target increased from 1 to 3 phonemes for word primes but not for nonword primes. Taken together, these studies suggest that phonological information facilitates word recognition as a result of excitation at a prelexical level and increases response time as a result of competition at a lexical level. These processes are best characterized by connectionist models of word recognition.
The process of hypothesis testing entails both information selection (asking questions) and information use (drawing inferences from the answers to those questions). We demonstrate that although subjects may be sensitive to diagnosticity in choosing which questions to ask, they are insufficiently sensitive to the fact that different answers to the same question can have very different diagnosticities. This can lead subjects to overestimate or underestimate the information in the answers they receive. This phenomenon is demonstrated in two experiments using different kinds of inferences (category membership of individuals and composition of sampled populations). In combination with certain information-gathering tendencies, demonstrated in a third experiment, insensitivity to answer diagnosticity can contribute to a tendency toward preservation of the initial hypothesis. Results such as these illustrate the importance of viewing hypothesis-testing behavior as an interactive, multistage process that includes selecting questions, interpreting data, and drawing inferences.
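The asymmetry described above, that different answers to the same question can carry very different amounts of information, can be illustrated with a small Bayesian sketch. The probabilities below are invented for illustration and are not taken from the experiments; diagnosticity is expressed as the (absolute log) likelihood ratio of an answer.

```python
import math

def posterior(prior, p_answer_given_h, p_answer_given_not_h):
    """Bayes' rule for a binary hypothesis H after observing one answer."""
    num = prior * p_answer_given_h
    return num / (num + (1 - prior) * p_answer_given_not_h)

def diagnosticity(p_answer_given_h, p_answer_given_not_h):
    """Absolute log likelihood ratio: how much this particular answer
    should revise belief in H, regardless of direction."""
    return abs(math.log10(p_answer_given_h / p_answer_given_not_h))

# Illustrative question: a feature present in 90% of H-members but
# only 50% of non-members.
p_yes_h, p_yes_not = 0.9, 0.5   # a "yes" answer
p_no_h, p_no_not = 0.1, 0.5     # a "no" answer

# A "yes" (likelihood ratio 1.8) is only weakly diagnostic, while a
# "no" (likelihood ratio 0.2) is strongly diagnostic -- the same
# question, two very different answer diagnosticities.
weak = diagnosticity(p_yes_h, p_yes_not)
strong = diagnosticity(p_no_h, p_no_not)
```

Treating both answers as equally informative, as the subjects in these experiments tended to do, therefore overweights the "yes" and underweights the "no".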
A phonological relationship between a prime and a target produces facilitation when one or two initial phonemes are shared (low-similarity facilitation) but produces interference when more phonemes are shared (high-similarity interference). Although low-similarity facilitation appears to be a strategic effect (Goldinger, Luce, Pisoni, & Marcario, 1992), this result cannot generalize to high-similarity interference because the two effects are dissociated. In the present study, strategic processing in high-similarity interference was investigated. The phonological relatedness proportion (PRP) and the prime-target interstimulus interval (ISI) were varied in a shadowing experiment. Low-similarity facilitation was found only with a high PRP and long ISI, but high-similarity interference was found regardless of PRP and ISI. These results suggest that strategies influence low-similarity facilitation, but high-similarity interference reflects automatic processing.

Several studies (e.g., Goldinger, Luce, Pisoni, & Marcario, 1992; Jakimik, Cole, & Rudnicky, 1985; Radeau, Morais, & Dewier, 1989; Slowiaczek & Hamburger, 1992) have examined the role of phonology in auditory word recognition by using a priming paradigm (Meyer & Schvaneveldt, 1971) in which a target word is preceded by a prime that shares some of its initial phonemes. Two dissociable effects have been obtained in this area of research: low-similarity facilitation and high-similarity interference. Although low-similarity facilitation involves strategic processes (Goldinger et al., 1992), the influence of strategies in high-similarity interference has not been investigated. Determining the role of strategic processes in phonological priming is critical for models of spoken word recognition that propose operations relying on phonology and predict phonological priming under various circumstances.
For instance, in cohort theory (Marslen-Wilson, 1987), word recognition begins by activating a cohort of possible lexical candidates whose initial phonemes match the incoming signal. As such, a phonologically related prime could preactivate a target and facilitate responses. Another theory, the neighborhood activation model (NAM; Luce, 1986), suggests that similar-sounding lexical entries compete during word recognition. That is, the probability of recognizing a word is a function of the number, word frequency, and phonetic similarity of the word's neighbors. Presenting a phonologically related prime effectively increases its frequency and, thus, increases the competition between it and the target. A connectionist model pro-
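NAM's competition among similar-sounding neighbors can be sketched as a frequency-weighted choice rule: evidence for the target is pitted against the summed evidence for its phonological neighbors. This is a minimal illustration of that idea, not the model's published parameterization; the similarity and frequency values are invented.

```python
def nam_identification_prob(target_sim, target_freq, neighbors):
    """Probability of identifying the target under a simple choice rule.

    target_sim:  phonetic match of the input to the target (0-1)
    target_freq: target word frequency (illustrative units)
    neighbors:   list of (similarity, frequency) pairs for the
                 target's phonological neighbors
    """
    target_term = target_sim * target_freq
    neighbor_sum = sum(s * f for s, f in neighbors)
    return target_term / (target_term + neighbor_sum)

# A word in a dense, high-frequency neighborhood is harder to identify
# than the same word in a sparse neighborhood:
sparse = nam_identification_prob(1.0, 50, [(0.5, 10)])
dense = nam_identification_prob(1.0, 50, [(0.5, 10)] * 8)
```

On this view, a phonologically related prime acts like an extra high-frequency neighbor in the denominator, which is how priming can slow rather than speed recognition.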
Cohort theory, developed by Marslen-Wilson and Welsh (1978), proposes that a "cohort" of all the words beginning with a particular sound sequence will be activated during the initial stage of the word recognition process. We used a priming technique to test specific predictions regarding cohort activation in three experiments. In each experiment, subjects identified target words embedded in noise at different signal-to-noise ratios. The target words were either presented in isolation or preceded by a prime item that shared phonological information with the target. In Experiment 1, primes and targets were English words that shared zero, one, two, three, or all phonemes from the beginning of the word. In Experiment 2, nonword primes preceded word targets and shared initial phonemes. In Experiment 3, word primes and word targets shared phonemes from the end of a word. Evidence of reliable phonological priming was observed in all three experiments. The results of the first two experiments support the assumption of activation of lexical candidates based on word-initial information, as proposed in cohort theory. However, the results of the third experiment, which showed increased probability of correctly identifying targets that shared phonemes from the end of words, did not support the predictions derived from the theory. The findings are discussed in terms of current models of auditory word recognition and recent approaches to spoken-language understanding.

The perception and comprehension of spoken language involves a complex interaction among several different sources of linguistic information. To comprehend a sentence, a listener must analyze the phonetic, lexical, syntactic, semantic, and pragmatic information encoded in the speech waveform. Word perception is clearly a critical part of the comprehension process because words provide the interface between the perceptual processing of stimulus information and the conceptual interpretation of an utterance.
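The cohort mechanism tested here, in which all words sharing the word-initial sound sequence are activated and then winnowed as more input arrives, can be sketched as prefix filtering over a lexicon. The toy lexicon and its rough phonemic spellings are illustrative assumptions, not experimental stimuli.

```python
def cohort(lexicon, heard_phonemes):
    """Lexical candidates consistent with the word-initial phonemes
    heard so far (the activation stage of cohort theory)."""
    return [word for word in lexicon if word.startswith(heard_phonemes)]

# Toy lexicon in rough phonemic spelling (illustrative):
lexicon = ["kaptin", "kapsul", "kamera", "katl", "batl"]

# The cohort shrinks as word-initial information accumulates:
cohort(lexicon, "ka")   # -> ["kaptin", "kapsul", "kamera", "katl"]
cohort(lexicon, "kap")  # -> ["kaptin", "kapsul"]
```

Note that overlap confined to word endings (as in Experiment 3) activates no candidates under this prefix rule, which is why priming from word-final phonemes is difficult for the theory to accommodate.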
In principle, it is possible to distinguish two functionally different processes that subserve word perception: word recognition and lexical access. Although there are no standard or widely agreed-upon definitions for these terms, we can define word recognition as the pattern recognition process that allows a listener to identify a spoken or printed stimulus as a word and lexical access as the process that mediates access to abstract knowledge (e.g., syntactic, semantic, pragmatic information) about a lexical entry (see Pisoni & Luce, in press). Note that making this theoretical distinction does not require that these processes operate as autonomous modules (cf. Fodor, 1983; Forster, 1978); rather it serves only to partition word perception into separate cognitive operations that are theoretically quite different.

Over the last few years, there has been an increased interest in the processes that mediate perception of spoken words (Cole, 1980; Cole & Rudnicky, 1983), and three general findings have emerged from this work (see Cole & Jakimik, 1980; Foss & Blank...
This paper reports the results of three projects concerned with auditory word recognition and the structure of the lexicon. The first project was designed to experimentally test several specific predictions derived from MACS, a simulation model of the Cohort Theory of word recognition. Using a priming paradigm, evidence was obtained for acoustic-phonetic activation in word recognition in three experiments. The second project describes the results of analyses of the structure and distribution of words in the lexicon using a large lexical database. Statistics about similarity spaces for high- and low-frequency words were applied to previously published data on the intelligibility of words presented in noise. Differences in identification were shown to be related to structural factors about the specific words and the distribution of similar words in their neighborhoods. Finally, the third project describes efforts at developing a new theory of word recognition known as Phonetic Refinement Theory. The theory is based on findings from human listeners and was designed to incorporate some of the detailed acoustic-phonetic and phonotactic knowledge that human listeners have about the internal structure of words and the organization of words in the lexicon, and how they use this knowledge in word recognition. Taken together, the results of these projects demonstrate a number of new and important findings about the relation between speech perception and auditory word recognition, two areas of research that have traditionally been approached from quite different perspectives.
Two auditory lexical decision experiments were conducted to determine whether facilitation can be obtained when a prime and a target share word-initial phonological information. Subjects responded "word" or "nonword" to monosyllabic words and nonwords controlled for frequency. Each target was preceded by the presentation of either a word or nonword prime that was identical to the target or shared three, two, or one phonemes from the beginning. The results showed that lexical decision times decreased when the prime and target were identical for both word and nonword targets. However, no facilitation was observed when the prime and target shared three, two, or one initial phonemes. These results were found when the interstimulus interval between the prime and target was 500 msec or 50 msec. In a second experiment, no differences were found between primes and targets that shared three, one, or zero phonemes, although facilitation was observed for identical prime-target pairs. The results are compared to recent findings obtained using a perceptual identification paradigm. Taken together, the findings suggest several important differences in the way lexical decision and perceptual identification tasks tap into the information-processing system during auditory word recognition.

Researchers concerned with issues in word recognition and lexical access have relied on the lexical decision paradigm to answer a number of fundamental questions about the representation of words in memory and the processes used to contact these representations in language processing. This paradigm requires subjects to determine as quickly as possible whether a stimulus item is a word or a nonword. Early research using lexical decision examined structural effects of visually presented lexical items on the speed of classifying these items as words or nonwords (Snodgrass & Jarvella, 1972; Stanners & Forbach, 1973; Stanners, Forbach, & Headley, 1971).
In other research, the lexical decision task has been used to investigate the effects of frequency on classification time (Rubenstein, Garfield, & Millikan, 1970; Rubenstein, Lewis, & Rubenstein, 1971; Stanners, Jastrzembski, & Westbrook, 1975) and the status of morphologically related items in memory (Stanners, Neiser, Hernon, & Hall, 1979; Stanners, Neiser, & Painton, 1979; Taft & Forster, 1975, 1976). The basic design of the paradigm has also been extended to examine the priming effects of associated items on lexical decision times. Meyer and Schvaneveldt (1971) found that subjects were faster at classifying a letter string (e.g., DOCTOR) as a word if the preceding letter string was an associated word (e.g., NURSE) than if the preceding letter string was an unassociated word (e.g., BUTTER).

The research reported here was supported by NIH Grant NS-12179 to Indiana University in Bloomington. We would like to thank Paul A. Luce for assistance in recording the stimuli and for his comments on the manuscript. We also thank Joseph Sternberger for several suggestions. Requests for reprints should be sent to L. M. S...
Although research examining the use of prosodic information in the processing of spoken words has increased in recent years, results from these studies have been inconclusive. The present series of experiments systematically examines the importance of one prosodic variable (lexical stress) in the recognition of isolated spoken words. Data collected in an identification task suggest that segmental information may be more heavily relied upon when appropriate lexical stress information is not available. Results of subsequent reaction time experiments support the hypothesis that lexical stress influences the processing of auditorily presented words. In three shadowing experiments, correctly stressed items were produced faster than incorrectly stressed items, and in a lexical decision experiment, correctly stressed words were classified faster than incorrectly stressed words. Thus, this work provides evidence across several experimental tasks for the use of lexical stress information in the processing of spoken words. Moreover, the data suggest that lexical stress should be an important aspect of the representation of words in an interactive model of auditory word recognition.