A strong phonological theory of reading is proposed and discussed. The first claim of this article is that current debates on word recognition are often based on different axioms regarding the cognitive structures of the mental lexicon rather than conflicting empirical evidence. These axioms lead to different interpretations of the same data. It is argued that once the implicit axioms of competing theories in visual word recognition are explicated, a strong phonological model presents a viable and coherent approach. The assumptions underlying a strong phonological theory of reading are outlined, and 4 theoretical questions are examined: Is phonological recoding a mandatory phase of print processing? Is phonology necessary for lexical access? Is phonology necessary for accessing meaning? How can phonology be derived from orthographic structure? These issues are integrated into a general theory that is constrained by all of the findings.
We investigated the psychological reality of the concept of orthographical depth and its influence on visual word recognition by examining naming performance in Hebrew, English, and Serbo-Croatian. We ran three sets of experiments using native speakers and identical experimental methods in each language. Experiment 1 revealed that the lexical status of the stimulus (high-frequency words, low-frequency words, and nonwords) significantly affected naming in Hebrew (the deepest of the three orthographies). This effect was only moderate in English and nonsignificant in Serbo-Croatian (the shallowest of the three orthographies). Moreover, only in Hebrew did lexical status have similar effects on naming and lexical decision performance. Experiment 2 revealed that semantic priming effects in naming were larger in Hebrew than in English and completely absent in Serbo-Croatian. Experiment 3 revealed that a large proportion of nonlexical tokens (nonwords) in the stimulus list affects the naming of words in Hebrew and in English, but not in Serbo-Croatian. These results were interpreted as strong support for the orthographical depth hypothesis and suggest, in general, that in shallow orthographies phonology is generated directly from print, whereas in deep orthographies phonology is derived from the internal lexicon.
Statistical learning is typically considered to be a domain-general mechanism by which cognitive systems discover the underlying distributional properties of the input. Recent studies examining whether there are commonalities in the learning of distributional information across different domains or modalities consistently reveal, however, modality and stimulus specificity. An important question is, therefore, how and why a hypothesized domain-general learning mechanism systematically produces such effects. We offer a theoretical framework according to which statistical learning is not a unitary mechanism but a set of domain-general computational principles that operate in different modalities and are therefore subject to the specific constraints characteristic of their respective brain regions. This framework offers testable predictions, and we discuss its computational and neurobiological plausibility.
Hebrew-English cognates (translations similar in meaning and form) and noncognates (translations similar in meaning only) were examined in masked translation priming. Enhanced priming for cognates was found with L1 (dominant language) primes, but unlike previous results, it was not found with L2 (nondominant language) primes. Priming was also obtained for noncognates, whereas previous studies showed unstable effects for such stimuli. The authors interpret the results in a dual-lexicon model by suggesting that (a) both orthographic and phonological overlap are needed to establish shared lexical entries for cognates (and hence also symmetric cognate priming), and (b) script differences facilitate rapid access by providing a cue to the lexical processor that directs access to the proper lexicon, thus producing stable noncognate priming. The asymmetrical cognate effect obtained with different scripts may be attributed to an overreliance on phonology in L2 reading.
All Hebrew words are composed of 2 interwoven morphemes: a triconsonantal root and a phonological word pattern. The lexical representations of these morphemic units were examined using masked priming. When primes and targets shared an identical word pattern, neither lexical decision nor naming of targets was facilitated. In contrast, root primes facilitated both lexical decisions and naming of target words that were derived from these roots. This priming effect proved to be independent of meaning similarity, because no priming effects were found when primes and targets were semantically but not morphologically related. These results suggest that Hebrew roots are lexical units, whereas word patterns are not. A working model of lexical organization in Hebrew is offered on the basis of these results.
In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing, offering various forms of noisy-position or context-sensitive coding, has revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, of an apparent insensitivity of skilled readers to letter order. Underlying the current revolution is the theoretical assumption that readers' insensitivity to letter order reflects the special way in which the human brain encodes the position of letters in printed words. The present paper discusses the theoretical shortcomings and misconceptions of this approach to visual word recognition. A systematic review of data obtained from a variety of languages demonstrates that letter-order insensitivity is neither a general property of the cognitive system nor a property of how the brain encodes letters. Rather, it is a variant and idiosyncratic characteristic of some languages, mostly European, reflecting a strategy of optimizing encoding resources given the specific structure of words. Since the main goal of reading research is to develop theories that describe the fundamental and invariant phenomena of reading across orthographies, an alternative approach to modeling visual word recognition is offered. The dimensions of a possible universal model of reading, which outlines the common cognitive operations involved in orthographic processing in all writing systems, are discussed.
Most research in Statistical Learning (SL) has focused on participants' mean success rate in detecting statistical contingencies at the group level. In recent years, however, researchers have shown increased interest in individual abilities in SL, either to predict other cognitive capacities or as a tool for understanding the mechanisms underlying SL. Most, if not all, of this research enterprise employs SL tasks that were originally designed for group-level studies. We argue that, from an individual-differences perspective, such tasks are psychometrically weak and sometimes even flawed. In particular, existing SL tasks have three major shortcomings: (1) the number of trials in the test phase is often too small (or there is extensive repetition of the same targets throughout the test), (2) a large proportion of the sample performs at chance level, so that most of the data points reflect noise, and (3) the test items following familiarization are all of the same type and at an identical level of difficulty. These factors lead to high measurement error, inevitably resulting in low reliability and thereby doubtful validity. Here we present a novel method specifically designed for the measurement of individual differences in visual SL. The novel task we offer displays substantially superior psychometric properties. We report data regarding the reliability of the task and discuss the importance of implementing such tasks in future research.