The present study used event-related potentials (ERPs) to examine the time course of orthographic and phonological priming in the masked priming paradigm. Participants monitored visual target words for occasional animal names, and ERPs to nonanimal critical items were recorded. These critical items were preceded by different types of primes: Orthographic priming was examined using transposed-letter (TL) primes (e.g., barin-BRAIN) and their controls (e.g., bosin-BRAIN); phonological priming was examined using pseudohomophone primes (e.g., brane-BRAIN) and their controls (e.g., brant-BRAIN). Both manipulations modulated the N250 ERP component, which is hypothesized to reflect sublexical processing during visual word recognition. Orthographic (TL) priming and phonological (pseudohomophone) priming were found to have distinct topographical distributions and different timing, with orthographic effects arising earlier than phonological effects.

Evidence concerning the relative timing of component processes provides a fundamental constraint for models of visual word recognition. Such time-course analyses are an important addition to the many studies that have examined each component process separately. Evidence for rapid activation of phonological codes, for example, has been obtained repeatedly with the masked priming paradigm and brief prime durations (e.g., Carreiras, Ferrand, Grainger, & Perea, 2005; Frost, Ahissar, Gotesman, & Tayeb, 2003; Lukatela & Turvey, 1994; Perfetti & Bell, 1991; see Rastle & Brysbaert, in press, for review). However, direct comparisons of orthographic and phonological priming are less abundant (e.g., Ferrand & Grainger, 1992; Grainger & Ferrand, 1996; Ziegler, Ferrand, Jacobs, Rey, & Grainger, 2000). One such study is particularly relevant to the experiment we report here.
Using the masked priming paradigm, Ferrand and Grainger (1993) varied both prime exposure duration and the amount of orthographic and phonological overlap between primes and targets. Orthographic priming emerged with a prime duration of 33 ms, whereas phonological priming required 67 ms of prime exposure to be fully established (see Perfetti & Tan, 1998, for a similar pattern in Chinese).

This time-course pattern is consistent with the results of studies manipulating the relative position of letters shared by prime and target. The primes in these experiments have included subset primes (e.g., grdn-GARDEN; Grainger, Granier, Farioli, Van Assche, & van Heuven, 2006; Peressotti & Grainger, 1999), superset primes (e.g., gafrsden-GARDEN; Van ), and transposed-letter (TL) primes (e.g., gadren-GARDEN; Perea & Lupker, 2004; Schoonbaert & Grainger, 2004). All these studies point to an early phase of orthographic processing that is not influenced by phonology. Thus, Grainger et al. The phonological TL condition (relobucion-REVOLUCIÓN) produced response latencies in a lexical decision task that did not differ significantly from those in the orthographic control condition (reloducion-REVOLUCIÓN), and were significantly slower than t...
How do comprehenders build up overall meaning representations of visual real-world events? This question was examined by recording event-related potentials (ERPs) while participants viewed short, silent movie clips depicting everyday events. In two experiments, it was demonstrated that presentation of contextually inappropriate information in the movie endings evoked an anterior negativity. This effect was similar to the N400 component, whose amplitude has previously been reported to correlate inversely with the strength of the semantic relationship between the context and the eliciting stimulus in word and static-picture paradigms. However, a second, somewhat later ERP component—a posterior late positivity—was evoked specifically when target objects presented in the movie endings violated goal-related requirements of the action constrained by the scenario context (e.g., an electric iron, which lacks a sharp-enough edge, was used in place of a knife in a bread-cutting scenario). These findings suggest that comprehension of the visual real world might be mediated by two neurophysiologically distinct semantic integration mechanisms. The first mechanism, reflected by the anterior N400-like negativity, maps incoming information onto connections of varying strengths between concepts in semantic memory. The second mechanism, reflected by the posterior late positivity, evaluates incoming information against the discrete requirements of real-world actions. We suggest that there may be a tradeoff between these mechanisms in their utility for integrating across people, objects, and actions during event comprehension, in which the first mechanism is better suited to familiar situations and the second to novel situations.
We report three experiments that combine the masked priming paradigm with the recording of event-related potentials in order to examine the time course of cross-modal interactions during word recognition. Visually presented masked primes preceded either visually or auditorily presented targets that were or were not the same word as the prime. Experiment 1 used the lexical decision task, and in Experiments 2 and 3 participants monitored target words for animal names. The results show a strong modulation of the N400 and an earlier ERP component (the N250) in within-modality (visual-visual) repetition priming, and a much weaker and later N400-like effect (400-700 ms) in the cross-modal (visual-auditory) condition with prime exposures of 50 ms (Experiments 1 & 2). With a prime duration of 67 ms (Experiment 3), cross-modal ERP priming effects arose earlier, during the traditional N400 epoch (300-500 ms), and were also larger overall than at the shorter prime duration.

Keywords: word recognition; cross-modal priming; event-related potentials

Masked Cross-Modal Repetition Priming: An Event-Related Potential Investigation

There is increasing evidence from studies of word recognition in both the visual and auditory modalities that information associated with the non-presented modality can affect the recognition process. Thus, for example, it is now commonly accepted that recognition of a printed word is influenced by information concerning its pronunciation (e.g., Frost, 1998). This accumulation of empirical evidence has led to the development of models of word recognition that allow strong interactivity across modality-specific representations. For example, Grainger and Ferrand (1994) proposed an extension of McClelland and Rumelhart's (1981) interactive-activation model that included both sublexical and lexical-level connections between orthographic and phonological representations (a recent version of this model is shown in Figure 1).
In the bimodal interactive-activation model shown in Figure 1, presentation of a visual word stimulus generates activation in orthographic codes, which rapidly activate the corresponding phonological codes; these phonological codes then influence the recognition process. The same holds for auditory word recognition, where phonological codes rapidly activate the corresponding orthographic representations.
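The cross-modal dynamics described above can be illustrated with a toy simulation. This is a minimal sketch, not the published model: the update rule, the rate and decay parameters, and the function names are all illustrative assumptions. It shows only the qualitative claim that within-modality (orthographic) activation rises before the cross-modally driven (phonological) activation.

```python
# Toy sketch of cross-modal activation spread in a bimodal
# interactive-activation framework. All parameters and the update
# rule are illustrative assumptions, not values from the model.

def simulate(steps=20, input_rate=0.3, cross_rate=0.2, decay=0.1):
    """Visual input drives the orthographic code directly; the
    phonological code is driven only by orthographic activation,
    so it rises later and more slowly (activations clipped to [0, 1])."""
    ortho, phono = 0.0, 0.0
    trace = []
    for _ in range(steps):
        ortho += input_rate * (1.0 - ortho) - decay * ortho
        phono += cross_rate * ortho * (1.0 - phono) - decay * phono
        ortho = max(0.0, min(1.0, ortho))
        phono = max(0.0, min(1.0, phono))
        trace.append((round(ortho, 3), round(phono, 3)))
    return trace

trace = simulate()
# Orthographic activation leads phonological activation at every step,
# mirroring the earlier onset of orthographic priming effects.
assert all(o >= p for o, p in trace)
```

With these (assumed) parameters, both codes grow toward an asymptote, but the phonological code lags because its input is gated by the orthographic activation level, which is the intuition behind the earlier orthographic effects reported above.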