The authors argue that perception is Bayesian inference based on accumulation of noisy evidence and that, in masked priming, the perceptual system is tricked into treating the prime and the target as a single object. Of the 2 algorithms considered for formalizing how the evidence sampled from a prime and target is combined, only 1 was shown to be consistent with the existing data from the visual word recognition literature. This algorithm was incorporated into the Bayesian Reader model (D. Norris, 2006), and its predictions were confirmed in 3 experiments. The experiments showed that the pattern of masked priming is not a fixed function of the relations between the prime and the target but can be changed radically by changing the task from lexical decision to a same-different judgment. Implications of the Bayesian framework of masked priming for unconscious cognition and visual masking are discussed.
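The core idea, as described in the abstract, is that evidence sampled from the prime and the target is pooled as though it came from a single object. The sketch below illustrates that kind of pooling with a two-hypothesis Bayesian accumulator. It is purely illustrative: the Gaussian noise model, the parameter values, and the two-hypothesis setup are invented here and are not taken from Norris's (2006) Bayesian Reader implementation.

```python
import math

def accumulate(samples, means, prior=(0.5, 0.5), sigma=1.0):
    """Update a posterior over two word hypotheses from noisy samples.

    Each sample is assumed (for illustration only) to be a Gaussian draw
    centred on the true word's perceptual value. Evidence is combined by
    summing log-likelihoods, so samples from the prime and from the target
    pool into a single estimate, as if they arose from one object.
    """
    log_post = [math.log(p) for p in prior]
    for x in samples:
        for i, mu in enumerate(means):
            log_post[i] += -((x - mu) ** 2) / (2 * sigma ** 2)
    # Normalise in a numerically stable way.
    z = max(log_post)
    weights = [math.exp(lp - z) for lp in log_post]
    total = sum(weights)
    return [w / total for w in weights]

# Two samples "from the prime" followed by two "from the target", all near
# hypothesis 0: pooled evidence drives the posterior toward that hypothesis.
posterior = accumulate([0.2, -0.1, 0.3, 0.1], means=(0.0, 2.0))
```

The point of the sketch is only the pooling step: because the prime and target samples enter the same running sum of log-likelihoods, a prime that resembles the target effectively gives the target a head start in evidence accumulation.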
A prime generated by transposing two internal letters (e.g., jugde) produces strong priming of the original word (judge). In lexical decision, this transposed-letter (TL) priming effect is generally weak or absent for nonword targets; thus, it is unclear whether the origin of this effect is lexical or prelexical. The authors describe the Bayesian Reader theory of masked priming (D. Norris & S. Kinoshita, 2008), which explains why nonwords do not show priming in lexical decision but do in the cross-case same-different task. This analysis is followed by 3 experiments showing that priming in this task is based neither on low-level perceptual similarity between the prime and target nor on phonology, making the case that priming is based on a prelexical orthographic representation. The authors then use this task to demonstrate equivalent TL priming effects for nonwords and words. The results are interpreted as the first reliable evidence based on the masked priming procedure that letter position is not coded absolutely within the prelexical, orthographic representation. The implications of the results for current letter position coding schemes are discussed.
The goal of research on how letter identity and order are perceived during reading is often characterized as one of "cracking the orthographic code." Here, we suggest that there is no orthographic code to crack: Words are perceived and represented as sequences of letters, just as in a dictionary. Indeed, words are perceived and represented in exactly the same way as other visual objects. The phenomena that have been taken as evidence for specialized orthographic representations can be explained by assuming that perception involves recovering information that has passed through a noisy channel: the early stages of visual perception. The noisy channel introduces uncertainty into letter identity, letter order, and even whether letters are present or absent. We develop a computational model based on this simple principle and show that it can accurately simulate lexical decision data from the lexicon projects in English, French, and Dutch, along with masked priming data that have been taken as evidence for specialized orthographic representations.
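The noisy-channel idea in this abstract can be made concrete with a toy posterior calculation: the reader receives a letter string corrupted by channel noise and infers which lexical entry produced it. Everything below (the three-word lexicon, the independent per-position letter-confusion model, the 0.8 parameter) is invented here for illustration and is not the authors' model, which also handles order uncertainty and letter insertions/deletions.

```python
def likelihood(percept, word, p_correct=0.8):
    """P(percept | word) under an assumed independent per-position
    letter-confusion channel: each letter survives with p_correct and is
    otherwise replaced by one of the 25 other letters uniformly."""
    if len(percept) != len(word):
        return 0.0
    p = 1.0
    for a, b in zip(percept, word):
        p *= p_correct if a == b else (1 - p_correct) / 25
    return p

def posterior(percept, lexicon):
    """Posterior over a small lexicon, assuming a uniform prior."""
    likes = {w: likelihood(percept, w) for w in lexicon}
    total = sum(likes.values()) or 1.0
    return {w: like / total for w, like in likes.items()}

# Even under this crude channel, the transposed-letter string 'jugde'
# still points overwhelmingly to 'judge' rather than its neighbours.
post = posterior("jugde", ["judge", "fudge", "lodge"])
```

This is the sense in which no special orthographic code is needed: ordinary Bayesian inference over a noisy channel already tolerates letter-identity (and, in the full model, letter-order) uncertainty.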
Two experiments were performed to investigate the nature of the masked onset priming effect in naming, that is, the facilitation in naming latency that is observed when a target shares the initial grapheme/phoneme with a masked prime. Experiment 1 showed that the effect is not due to position-independent letter priming, since the naming of nonword targets preceded by masked primes was facilitated only if the prime shared the initial letter with the target (e.g., suf-SIB) and not if the prime shared the final letter (e.g., mub-SIB). Experiment 2 showed that the effect reflects the sharing of onsets rather than the initial letter, since facilitation due to an overlap of the initial letter was observed only for simple onset targets (e.g., penny-PASTE), for which the letter corresponded to the onset, and not for complex onset targets (e.g., binga-BLISS). It is argued that the serial nature of the masked onset priming effect is best interpreted as reflecting the planning of articulation, rather than the computation of phonology from orthography.

Research on visual word recognition is currently dominated by computational models of reading aloud. The three main implementations are the parallel distributed processing (PDP) model proposed by Plaut, McClelland, Seidenberg, and Patterson (1996); the dual-route cascaded (DRC) model proposed by Coltheart and colleagues (Coltheart, Curtis, Atkins, & Haller, 1993; Coltheart & Rastle, 1994); and the parallel dual-route model proposed recently by Zorzi and colleagues (Zorzi, Houghton, & Butterworth, 1998). These models differ primarily in whether they assume common or distinct routines for computing phonology for words and nonwords, and in whether the computation of phonology occurs in parallel or sequentially across the letter string.
All of these models can account for the empirical findings that have become benchmarks for models of word recognition, such as the word frequency effect (faster responses to words that occur more frequently in print); the regularity effect (words that do not follow the standard spelling-to-sound correspondence rules, such as pint, are named more slowly than words that do, such as pink); and the frequency-by-regularity interaction (the regularity effect is greater for low-frequency words than for high-frequency words). Of these models, the DRC model is the only one that incorporates a sequential computational assumption.¹ That is, all other models (Plaut et al., 1996; Zorzi et al., 1998) assume that the derivation of phonology from print occurs in pa...

Author note: The research reported in this article was supported by a Macquarie University Research Grant to the author. Thanks are due Karren Towgood for research assistance. I am also grateful to the action editor, Ken Forster, and the reviewers, Ludovic Ferrand, Alan Kawamoto, and an anonymous reviewer, for their comments on an earlier version of the paper. Correspondence should be addressed to S. Kinoshita, Department of Psychology, Macquarie University, Sydney, NSW, Australia, 2109 (e-mail: sachiko.kinoshita@mq.edu.au).