We report three experiments that combine the masked priming paradigm with the recording of event-related potentials (ERPs) in order to examine the time-course of cross-modal interactions during word recognition. Visually presented masked primes preceded either visually or auditorily presented targets that were or were not the same word as the prime. Experiment 1 used the lexical decision task, and in Experiments 2 and 3 participants monitored target words for animal names. The results show a strong modulation of the N400 and an earlier ERP component (the N250) in within-modality (visual-visual) repetition priming, and a much weaker and later N400-like effect (400-700 ms) in the cross-modal (visual-auditory) condition with prime exposures of 50 ms (Experiments 1 and 2). With a prime duration of 67 ms (Experiment 3), cross-modal ERP priming effects arose earlier, during the traditional N400 epoch (300-500 ms), and were also larger overall than at the shorter prime duration.

Keywords: word recognition; cross-modal priming; event-related potentials
Masked Cross-Modal Repetition Priming: An Event-Related Potential Investigation

There is increasing evidence from studies of word recognition in both the visual and auditory modalities that information associated with the non-presented modality can affect the recognition process. Thus, for example, it is now commonly accepted that recognition of a printed word is influenced by information concerning its pronunciation (e.g., Frost, 1998). This accumulation of empirical evidence has led to the development of models of word recognition that allow strong interactivity across modality-specific representations. For example, Grainger and Ferrand (1994) proposed an extension of McClelland and Rumelhart's (1981) interactive-activation model that included both sublexical and lexical-level connections between orthographic and phonological representations (a recent version of this model is shown in Figure 1). In the bimodal interactive-activation model shown in Figure 1, presentation of a visual word stimulus generates activation in orthographic codes, which rapidly activate the corresponding phonological codes, which in turn influence the recognition process. The same holds for auditory word recognition, where phonological codes rapidly activate the corresponding orthographic representations.
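To make the cross-modal flow of activation concrete, the following Python fragment is a minimal sketch of interactive-activation dynamics in the style of McClelland and Rumelhart (1981), with added orthography-to-phonology links. The pool sizes, weight values, and parameter settings are illustrative assumptions for demonstration only, not the published model's specification.

```python
import numpy as np

# Minimal sketch of bimodal interactive-activation dynamics. All parameter
# values, pool sizes, and weights are illustrative assumptions, not the
# settings of Grainger and Ferrand (1994) or McClelland and Rumelhart (1981).

MAX_A, MIN_A, REST, DECAY, RATE = 1.0, -0.2, 0.0, 0.1, 0.2

def update(act, net):
    """One interactive-activation update step for a vector of units:
    excitatory net input drives activation toward MAX_A, inhibitory net
    input toward MIN_A, and decay pulls activation back toward REST."""
    delta = np.where(net > 0, net * (MAX_A - act), net * (act - MIN_A))
    return np.clip(act + RATE * (delta - DECAY * (act - REST)), MIN_A, MAX_A)

# Two word-level pools (three words each) linked by cross-modal connections.
orth = np.full(3, REST)                   # orthographic word units
phon = np.full(3, REST)                   # phonological word units
w_cross = 0.5 * np.eye(3)                 # orthography-to-phonology links (assumed)
visual_input = np.array([0.6, 0.0, 0.0])  # visual presentation of word 0

for _ in range(30):
    orth = update(orth, visual_input)     # bottom-up visual drive
    phon = update(phon, w_cross @ orth)   # orthography activates phonology

# The phonological unit for word 0 rises above rest despite no auditory
# input, mirroring the cross-modal flow the model posits.
print(phon.round(3))
```

In this sketch, purely visual input is sufficient to drive the matching phonological unit above its resting level, which is the property that makes cross-modal repetition priming possible within such an architecture.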