Learning to associate auditory information of speech sounds with visual information of letters is a first and critical step for becoming a skilled reader in alphabetic languages. Nevertheless, it remains largely unknown which brain areas subserve the learning and automation of such associations. Here, we employ functional magnetic resonance imaging to study letter-speech sound integration in children with and without developmental dyslexia. The results demonstrate that dyslexic children show reduced neural integration of letters and speech sounds in the planum temporale/Heschl sulcus and the superior temporal sulcus. While cortical responses to speech sounds in fluent readers were modulated by letter-speech sound congruency with strong suppression effects for incongruent letters, no such modulation was observed in the dyslexic readers. Whole-brain analyses of unisensory visual and auditory group differences additionally revealed reduced unisensory responses to letters in the fusiform gyrus in dyslexic children, as well as reduced activity for processing speech sounds in the anterior superior temporal gyrus, planum temporale/Heschl sulcus and superior temporal sulcus. Importantly, the neural integration of letters and speech sounds in the planum temporale/Heschl sulcus and the neural response to letters in the fusiform gyrus explained almost 40% of the variance in individual reading performance. These findings indicate that an interrelated network of visual, auditory and heteromodal brain areas contributes to the skilled use of letter-speech sound associations necessary for learning to read. By extending similar findings in adults, the data furthermore argue against the notion that reduced neural integration of letters and speech sounds in dyslexia reflects the consequence of a lifetime of reading struggle. Instead, they support the view that letter-speech sound integration is an emergent property of learning to read that develops inadequately in dyslexic readers, presumably as a result of a deviant interactive specialization of neural systems for processing auditory and visual linguistic inputs.
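To make the variance-explained claim concrete, the following sketch shows how a multiple regression over two per-child neural measures yields an R² of roughly 0.4. All data are synthetic and the variable names are illustrative assumptions; the sketch shows only the form of such an analysis, not the study's actual data or pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 40  # hypothetical number of children

# Hypothetical per-child neural measures (arbitrary units):
pt_integration = rng.normal(size=n)   # letter-speech sound integration in PT/HS
ffg_letters = rng.normal(size=n)      # response to letters in the fusiform gyrus

# Simulate reading fluency so the two predictors carry genuine signal; with
# these assumed effect sizes the expected R^2 is roughly 0.4.
reading = 0.6 * pt_integration + 0.6 * ffg_letters + rng.normal(size=n)

X = np.column_stack([pt_integration, ffg_letters])
model = LinearRegression().fit(X, reading)
print(f"R^2 = {model.score(X, reading):.2f}")  # proportion of variance explained
```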
Developmental dyslexia is a specific reading and spelling deficit affecting 4% to 10% of the population. Advances in understanding its origin support a core deficit in phonological processing characterized by difficulties in segmenting spoken words into their minimally discernible speech segments (speech sounds, or phonemes) and underactivation of left superior temporal cortex. A suggested but unproven hypothesis is that this phonological deficit impairs the ability to map speech sounds onto their homologous visual letters, which in turn prevents the attainment of fluent reading levels. The present functional magnetic resonance imaging (fMRI) study investigated the neural processing of letters and speech sounds in unisensory (visual, auditory) and multisensory (audiovisual congruent, audiovisual incongruent) conditions as a function of reading ability. Our data reveal that adult dyslexic readers underactivate superior temporal cortex for the integration of letters and speech sounds. This reduced audiovisual integration is directly associated with a more fundamental deficit in auditory processing of speech sounds, which in turn predicts performance on phonological tasks. The data provide a neurofunctional account of developmental dyslexia, in which phonological processing deficits are linked to reading failure through a deficit in neural integration of letters and speech sounds.
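A minimal illustration of the mediation-style chain summarized above (audiovisual integration → auditory processing of speech sounds → phonological performance), using synthetic data; the variable names and effect sizes are assumptions for illustration only, not the study's statistics.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30  # hypothetical number of adult readers

av_integration = rng.normal(size=n)  # STC letter-speech sound congruency effect
auditory = 0.7 * av_integration + rng.normal(scale=0.7, size=n)  # speech-sound response
phonology = 0.7 * auditory + rng.normal(scale=0.7, size=n)       # phonological task score

# Each direct link in the chain shows a sizeable correlation...
print(np.corrcoef(av_integration, auditory)[0, 1])
print(np.corrcoef(auditory, phonology)[0, 1])
# ...while the distal association (integration -> phonology) is weaker,
# the pattern expected if auditory processing mediates the relationship.
print(np.corrcoef(av_integration, phonology)[0, 1])
```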
Background: According to the traditional two-stage model of face processing, the face-specific N170 event-related potential (ERP) is linked to structural encoding of face stimuli, whereas later ERP components are thought to reflect processing of facial affect. This view has recently been challenged by reports of N170 modulations by emotional facial expression. This study examines the time-course and topography of the influence of emotional expression on the N170 response to faces.
This paper contains an error with respect to the Talairach (TAL) coordinates plotted in Figure 4 to indicate the position of the activation from posterior to anterior superior temporal gyrus (STG). These posterior-anterior TAL points (numbers plotted on the x axis) were accidentally reversed in the published manuscript.
Reading instruction can direct attention to different unit sizes in print-to-speech mapping, ranging from grapheme-phoneme to whole-word relationships. Thus, attentional focus during learning might influence brain mechanisms recruited during reading, as indexed by the N170 response to visual words. To test this, two groups of adults were trained to read an artificial script under instructions directing attention to grapheme-phoneme versus whole-word associations. N170 responses were subsequently contrasted within an active reading task. Grapheme-phoneme focus drove a left-lateralized N170 response relative to the right-lateralized N170 under whole-word focus. These findings suggest a key role for attentional focus in early reading acquisition.
Adults produce left-lateralized N170 responses to visual words relative to control stimuli, even within tasks that do not require active reading. This specialization begins in preschoolers as a right-lateralized N170 effect. We investigated whether this developmental shift reflects an early learning phenomenon, such as attaining visual familiarity with a script, by training adults in an artificial script and measuring N170 responses before and afterward. Training enhanced the N170 response, especially over the right hemisphere. This suggests N170 sensitivity to visual familiarity with a script before reading becomes sufficiently automatic to drive left-lateralized effects in a shallow encoding task.
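As a concrete illustration of how such N170 training and lateralization effects are commonly quantified, here is a minimal sketch on simulated averaged ERPs. The time window, sampling rate, and electrode clusters are common-practice assumptions, not the parameters reported by the studies above.

```python
import numpy as np

fs = 500                                       # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.5, 1 / fs)           # epoch from -100 ms to 500 ms
n170_win = (times >= 0.15) & (times <= 0.20)   # typical N170 latency window

def n170_amplitude(erp):
    """Mean amplitude in the N170 window (more negative = larger N170)."""
    return erp[n170_win].mean()

# Averaged waveforms (n_times,) for left vs right occipito-temporal electrode
# clusters (e.g., P7/PO7 and P8/PO8), before and after training. Random noise
# stands in for real ERP data here.
rng = np.random.default_rng(2)
erp = {(s, h): rng.normal(size=times.size)
       for s in ("pre", "post") for h in ("left", "right")}

for session in ("pre", "post"):
    left = n170_amplitude(erp[(session, "left")])
    right = n170_amplitude(erp[(session, "right")])
    # A training-related enhancement would appear as more negative amplitudes
    # post-training, with the larger change over the right hemisphere.
    print(session, f"left={left:.2f}", f"right={right:.2f}")
```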
Letters and speech sounds are the basic units of correspondence between spoken and written language. Associating auditory information of speech sounds with visual information of letters is critical for learning to read; however, the neural mechanisms underlying this association remain poorly understood. The present functional magnetic resonance imaging study investigates the automaticity and behavioral relevance of integrating letters and speech sounds. Within a unimodal auditory identification task, speech sounds were presented in isolation (unimodally) or bimodally in congruent and incongruent combinations with visual letters. Furthermore, the quality of the visual letters was manipulated parametrically. Our analyses revealed that the presentation of congruent visual letters led to a behavioral improvement in identifying speech sounds, which was paralleled by a similar modulation of cortical responses in the left superior temporal sulcus. Under low visual noise, cortical responses in superior temporal and occipito-temporal cortex were further modulated by the congruency between auditory and visual stimuli. These cross-modal modulations of performance and cortical responses during a unimodal auditory task (speech identification) indicate the existence of a strong and automatic functional coupling between processing of letters (orthography) and speech (phonology) in the literate adult brain.
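The congruency analysis described above can be pictured as a simple contrast over per-condition response estimates. The following sketch uses hypothetical GLM betas for a single region of interest; all names and values are placeholders, not the study's data.

```python
import numpy as np

# Hypothetical per-subject response estimates (e.g., GLM betas) for one region
# of interest such as the left superior temporal sulcus. Rows are subjects;
# columns follow `conditions`. Random values stand in for real estimates.
conditions = ["auditory_only", "av_congruent", "av_incongruent"]
rng = np.random.default_rng(3)
betas = rng.normal(loc=[1.0, 1.4, 0.9], scale=0.3, size=(20, 3))

auditory, congruent, incongruent = betas.T

# Cross-modal enhancement: congruent letters boost the speech-sound response.
enhancement = (congruent - auditory).mean()
# Congruency effect: congruent versus incongruent letter-sound pairings.
congruency = (congruent - incongruent).mean()
print(f"enhancement={enhancement:.2f}, congruency={congruency:.2f}")
```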
Background: Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional magnetic resonance imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation. Results: The results revealed an occipito-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network adapted more strongly to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency. Conclusions: These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for the integration of letters and speech sounds and demonstrate that fMR-A is sensitive to multisensory congruency effects that may not be revealed in BOLD amplitude per se.
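A minimal sketch of the fMR-adaptation logic: compare responses to first versus repeated presentations, split by congruency. The response values below are simulated placeholders under assumed effect directions, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(4)
n_subjects = 16

# Placeholder response amplitudes per subject (arbitrary units).
first_congruent = rng.normal(1.5, 0.2, n_subjects)
repeat_congruent = rng.normal(1.0, 0.2, n_subjects)    # strong adaptation
first_incongruent = rng.normal(1.5, 0.2, n_subjects)
repeat_incongruent = rng.normal(1.3, 0.2, n_subjects)  # weaker adaptation

def adaptation_index(first, repeated):
    """Proportional response reduction with stimulus repetition."""
    return (first - repeated) / first

ai_con = adaptation_index(first_congruent, repeat_congruent).mean()
ai_inc = adaptation_index(first_incongruent, repeat_incongruent).mean()
# Stronger adaptation for congruent repetitions would point to neuronal
# populations sensitive to the letter-speech sound relation.
print(f"adaptation: congruent={ai_con:.2f}, incongruent={ai_inc:.2f}")
```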