Neural correlates of semantic processing in reading aloud

For over a century, beginning at least as early as Cattell (1886), experimental psychologists have investigated the mental processes involved in reading single words aloud. For writing systems such as English, the task is challenging because of the many inconsistencies in the correspondences between spelling and sound. Early investigations into the neural systems supporting this task relied on autopsy studies, which established some of the critical brain regions responsible for acquired alexia (Déjerine, 1892). Integration of experimental and anatomical data, on the other hand, began comparatively recently, spurred in particular by the advent of non-invasive functional brain imaging.

Reading aloud is thought to involve a combination of orthographic (visual word form), phonological (word sound), and perhaps semantic (word meaning) information processing. Although the necessity for orthographic and phonological processing is undisputed, the degree to which semantic information is recruited to aid in reading aloud is a matter of debate. One possibility is that words with unusual correspondences between spelling and sound (e.g., YACHT, COLONEL) might benefit more from the collateral recruitment of semantic codes than words with more regular spelling-sound correspondences (Plaut, McClelland, Seidenberg, & Patterson, 1996).

Although there is agreement that semantic information is not always necessary for single-word reading aloud, two major cognitive and computational models posit very different roles for semantic processing in this task. Dual-route models, such as the dual-route cascaded (DRC) model, propose two separate pathways, one that implements a set of grapheme-phoneme correspondence (GPC) rules for mapping letter combinations to sound combinations (Coltheart,