In the domain of working memory, recent theories postulate that the maintenance of serial order is driven by position marking. According to this idea, serial order is maintained through associations of each item with an independent representation of the position that the item occupies in the sequence. Recent studies suggest that these position markers are spatial in nature, with beginning items associated with the left side of space and end items with the right side (i.e., the ordinal position effect). So far, however, it is unclear whether serial order is coded along the same principles in the verbal and the visuospatial domains. The aim of the current study was to investigate whether serial order is coded in a domain-general fashion. To address this question, 6 experiments were conducted. The first 3 experiments revealed that the ordinal position effect is found with verbal but not with spatial information. In the subsequent experiments, the authors isolated the origin of this dissociation and concluded that, to obtain spatial coding of serial order, it is not the nature of the encoded information (verbal, visual, or spatial) that is crucial, but whether the memoranda are semantically processed. This work supports the idea that serial order is coded in a domain-general fashion, but suggests that position markers are only spatially coded when the to-be-remembered information is processed at the semantic level.
The processes and the cues determining the orthographic structure of polysyllabic words remain far from clear. In the present study, we investigated the role of letter category (consonants vs. vowels) in the perceptual organization of letter strings. In a syllable counting task, participants were presented with written words matched for the number of spoken syllables and comprising either one vowel cluster fewer than the number of syllables (hiatus words, e.g., pharaon) or the same number of vowel clusters (e.g., parodie). Relative to control words, readers were slower and less accurate for hiatus words, for which they systematically underestimated the number of syllables (Experiment 1). The effect was stronger when the instructions emphasized response speed (Experiment 2) and when concurrent articulation was used (Experiment 3), and it did not stem from phonological structure (Experiment 4). Furthermore, hiatus words were pronounced more slowly and less accurately than control words (Experiment 5). Finally, in lexical decision, opposite effects occurred as a function of word length, with shorter words producing facilitation and longer words showing interference (Experiment 6). Taken together, the results show that the perceptual units extracted from visual letter strings are influenced by the orthographic status of letters. We discuss the implications of these findings in view of current theories of visual word recognition. © 2012 Elsevier Inc. All rights reserved.

Introduction

What do we see when we read a word? What are the functional units of word perception? These questions opened a paper by Santa, Santa, and Smith in 1977, who stated that the issue 'has generated an impressive body of literature in the last 75 years, but there is little agreement on an answer' (p. 585).
More than 30 years later, the situation has hardly changed, and the claim still holds true. The issue of orthographic coding in the perception of letter strings has been of interest since the earliest days of research on reading (Huey, 1908). Interest remains high because understanding the basic processes of visual word recognition constitutes a keystone of any theory of reading. Visual word recognition is not performed letter by letter, but rather operates on larger letter chunks that are processed simultaneously. Hence, a recurrent question in the field is what processing units are involved in the early steps of written word identification and how the perceptual processing system organizes letter strings into larger units. In the present paper, we report a set of studies aimed at exploring the role of letter category (consonant vs. vowel letters) in determining the internal structure of polysyllabic words. The issue of perceptual units has been approached from different angles according to periods and dominant trends in the field. Below, we present an overview of the major approaches and then discuss their relevance for the perceptual processing of polysyllabic words, which is of sp...
Understanding the front end of visual word recognition requires us to identify the processes by which letters are identified. Since most of the work on letter recognition has been conducted in English, letter perception modeling has been limited to the 26 letters of the Latin alphabet. However, many writing systems include letters with diacritic marks. In the present study, we examined whether diacritic letters are a mere variant of their base letter, and thus share the same abstract representation, or whether they function as separate elements from any other letters, and thus have separate representations. In Experiments 1A and 1B, participants performed an alphabetical decision task combined with masked priming. Target letters were preceded by the same letter (e.g., a-A), by a diacritic letter (e.g., â-A), or by an unrelated letter (e.g., z-A). The results showed that the primes sharing nominal identity (e.g., a) facilitated target processing as compared to unrelated primes (e.g., z), but that primes that included a diacritic mark (e.g., â) did not, with reaction times being similar to those in the unrelated priming condition. In Experiment 2 we replicated these results in a lexical decision task. Overall, this demonstrates that as long as diacritics are used in scripts to distinguish between lexical entries, the diacritic letters are not mere variants of their base letters but constitute unitary elements of the script in their own right, with diacritics contributing to the overall visual shape of a letter.
This study investigated the role of the syllable in the visual recognition of French words. The syllable congruency procedure was combined with masked priming in the lexical-decision task (Experiments 1 and 3) and the naming task (Experiment 2). Target words were preceded by a nonword prime sharing the first three letters, which either corresponded to the first syllable (congruent condition) or not (incongruent condition). When primes were displayed for 67 ms, similar results were found in the lexical-decision and naming tasks. Consonant-vowel targets such as BA.LANCE were recognised more rapidly in the congruent condition than in the incongruent and control conditions, while consonant-vowel-consonant targets such as BAL.CON were recognised more rapidly in the congruent and incongruent conditions than in the control condition. When a 43-ms SOA was used in the lexical-decision task, no significant priming effect was obtained. The results are discussed within an interactive-activation model incorporating syllable units.

Keywords: syllable congruency, lexical decision, naming, masked priming, CV versus CVC syllables

In recent decades, numerous studies have shown that phonological information is automatically activated during visual word recognition (see Frost, 1998, for a review). Masked priming (Forster & Davis, 1984) is a widely used paradigm for studying phonological effects (e.g., Frost, Ahissar, Gotesman, & Tayeb, 2003; Grainger, Diependaele, Spinelli, Ferrand, & Farioli, 2003; Lukatela, Frost, & Turvey, 1998; Pollatsek, Perea, & Carreiras, 2005; Rastle & Brysbaert, 2006; Shen & Forster, 1999). Besides preventing strategic processing by participants (Forster, 1998), this paradigm has made it possible to demonstrate that phonological effects are not confounded with orthographic activation (e.g., Frost et al., 2003; Lukatela et al., 1998).
Moreover, phonological effects were obtained in tasks that did not involve postlexical phonological units, suggesting that these effects arose from prelexical and lexical processes rather than articulatory processes (e.g., Lukatela et al., 1998). To accommodate these robust phonological effects, models of visual word recognition have to include a phonological coding of visual input. This feature requires determining which phonological units are activated during silent reading. In languages with clear syllable boundaries, such as Spanish, data have shown that syllables are involved in the processing of polysyllabic words (e.g., Alvarez, Carreiras, & Perea, 2004; Carreiras, Alvarez, & de Vega, 1993; Carreiras & Perea, 2002; Perea & Carreiras, 1998). In French, too, there is evidence for the activation of syllable units during lexical access (e.g., Carreiras, Ferrand, Grainger, & Perea, 2005; Doignon & Zagar, 2005; Mathey & Zagar, 2002; Mathey, Zagar, Doignon, & Seigneuric, 2006). However, the results are less consistent than in Spanish, since some studies failed to obtain syllabic effects during the processing of French words (e.g., Brand, Rey, & Peereman, 2003; Rouibah & Taft, 2001). To account...