Perceptual identification of spoken words in noise is less accurate when the target words are preceded by spoken phonetically related primes (Goldinger, Luce, & Pisoni, 1989). The present investigation replicated and extended this finding. Subjects shadowed target words presented in the clear that were preceded by phonetically related or unrelated primes. In addition, primes were either higher or lower in frequency than the target words. Shadowing latencies were significantly longer for target words preceded by phonetically related primes, but only when the prime-target interstimulus interval was short (50 vs. 500 msec). These results demonstrate that phonetic priming does not depend on target degradation and that it affects processing time. We further demonstrated that PARSYN, a connectionist instantiation of the neighborhood activation model, accurately simulates the observed pattern of priming.

Virtually all current models of spoken word recognition share the assumption that the perception of spoken words involves two fundamental processes: activation and competition (see P. A. Luce & Pisoni, 1998; Marslen-Wilson, 1989; McClelland & Elman, 1986; Norris, 1994). In such activation-competition models, the hallmark of the discrimination process is competition among multiple representations of words activated in memory. As a result, the role of competition has been a primary focus of research and theory on spoken word recognition in the last few years (e.g., Cluff & Luce, 1990; Goldinger, Luce, & Pisoni, 1989; Marslen-Wilson, 1989; McQueen, Norris, & Cutler, 1994; Norris, McQueen, & Cutler, 1995; Vitevitch & Luce, 1998, 1999). One example of an activation-competition model is the neighborhood activation model (NAM; P. A. Luce & Pisoni, 1998).
According to NAM, stimulus input activates a set (or neighborhood) of acoustic-phonetic patterns in memory. Patterns are activated to the degree to which they match the stimulus input (see Marslen-Wilson, 1989, and Morton, 1969, for similar proposals). These acoustic-phonetic patterns then activate a system of word decision units that are tuned to the patterns. Throughout the recognition process, the word decision units monitor three sources of information: (1) the activation levels of the acoustic-phonetic patterns to which the units are tuned, (2) higher level lexical information (specifically, lexical frequency), and (3) the overall level of activity in the entire system of decision units. It is assumed that each of the de...

This research was supported in part by Research Grants R01 DC 02658-01A2 and R29 DC 02629-03 from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health. We thank Dennis Norris and an anonymous reviewer for their advice and comments. We also thank Jim Sawusch for many helpful discussions and Michael S. Cluff for his assistance in running subjects. Correspondence concerning this article should be addressed to P. A. Luce, Department of Psychology, State University of New York, Buffalo, NY 14260 (e-mail: paul@deuro.fss.buffalo.edu).
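The competitive character of NAM's decision stage can be illustrated with its frequency-weighted decision rule, in which a word's identification probability is its frequency-weighted activation divided by the summed frequency-weighted activation of the word plus its neighbors. The sketch below is illustrative only: the activation and frequency values are made-up numbers, not data from the article.

```python
# Minimal sketch of the NAM frequency-weighted decision rule
# (after Luce & Pisoni, 1998). All numeric values are hypothetical.

def nam_identification_prob(target_activation, target_freq, neighbors):
    """Identification probability for the target word: its
    frequency-weighted activation divided by the summed
    frequency-weighted activation of target plus neighbors.

    neighbors: list of (activation, frequency) tuples.
    """
    weighted_target = target_activation * target_freq
    denom = weighted_target + sum(a * f for a, f in neighbors)
    return weighted_target / denom

# All else equal, a word in a sparse neighborhood is identified
# more readily than the same word in a dense neighborhood.
sparse = nam_identification_prob(0.8, 100, [(0.5, 50)])       # 1 competitor
dense = nam_identification_prob(0.8, 100, [(0.5, 50)] * 10)   # 10 competitors
assert sparse > dense
```

This captures why both neighborhood density and the relative frequency of neighbors (e.g., higher frequency primes) modulate recognition: either more competitors or stronger competitors shrink the target's share of the summed activation.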
Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed.
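Neighborhood density as used above is commonly operationalized with the one-phoneme rule: a neighbor is any word formed by a single substitution, addition, or deletion of a segment. The sketch below shows that computation on a hypothetical toy lexicon, using characters as stand-ins for phonemic segments.

```python
# Illustrative sketch of neighborhood density under the one-phoneme
# rule (single substitution, addition, or deletion). Characters stand
# in for phonemes; the toy lexicon is hypothetical.

def is_neighbor(w1, w2):
    """True if w2 differs from w1 by exactly one substitution,
    addition, or deletion of a segment."""
    if w1 == w2:
        return False
    la, lb = len(w1), len(w2)
    if abs(la - lb) > 1:
        return False
    if la == lb:  # same length: exactly one substitution
        return sum(a != b for a, b in zip(w1, w2)) == 1
    short, longer = (w1, w2) if la < lb else (w2, w1)
    # lengths differ by one: deleting one segment from the longer
    # word must yield the shorter word
    return any(longer[:i] + longer[i + 1:] == short
               for i in range(len(longer)))

def density(word, lexicon):
    """Number of one-phoneme neighbors of word in lexicon."""
    return sum(is_neighbor(word, w) for w in lexicon)

lexicon = ["cat", "bat", "hat", "cast", "at", "dog"]
print(density("cat", lexicon))  # prints 4 (bat, hat, cast, at)
```

Phonotactic probability, by contrast, is computed sublexically, from the positional frequencies of individual segments and segment pairs, which is why the two measures can dissociate even though they correlate.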
Neuroplastic changes in auditory cortex as a result of lifelong perceptual experience were investigated. Adults with early-onset deafness and long-term hearing aid experience were hypothesized to have undergone auditory cortex plasticity due to somatosensory stimulation. Vibrations were presented on the hand of deaf and normal-hearing participants during functional MRI. Vibration stimuli were derived from speech or were a fixed frequency. Higher, more widespread activity was observed within auditory cortical regions of the deaf participants for both stimulus types. Lifelong somatosensory stimulation due to hearing aid use could explain the greater activity observed in deaf participants.
The present results are consistent with the results of Bernstein et al. (2000). The need to rely on visual speech throughout life, and particularly for the acquisition of spoken language by individuals with early-onset hearing loss, can lead to enhanced speechreading ability.