Myelination, the elaboration of myelin surrounding neuronal axons, is essential for normal brain function. The development of the myelin sheath enables rapid, synchronized communication across the neural systems responsible for higher-order cognitive functioning. Despite this critical role, quantitative visualization of myelination in vivo is not possible with current neuroimaging techniques, including diffusion tensor and structural magnetic resonance imaging (MRI). Although these techniques offer insight into structural maturation, they reflect several different facets of development, e.g., changes in axonal size, density, coherence, and membrane structure; lipid, protein, and macromolecule content; and water compartmentalization. Consequently, observed signal changes are ambiguous, hindering meaningful inference linking imaging findings to measures of learning, behavior, or cognition. Here we present the first quantitative study of myelination in healthy human infants, from 3 to 11 months of age. Using a new myelin-specific MRI technique, we report a spatiotemporal pattern beginning in the cerebellum, pons, and internal capsule; proceeding caudocranially from the splenium of the corpus callosum and optic radiations (at 3–4 months); to the occipital and parietal lobes (at 4–6 months); and then to the genu of the corpus callosum and frontal and temporal lobes (at 6–8 months). Our results also offer preliminary evidence of hemispheric differences in myelination rate. This work represents a significant step forward in our ability to appreciate the fundamental process of myelination, and provides the first in vivo visualization of myelin maturation in healthy human infancy.
Summary: Autism spectrum disorders (henceforth autism) are diagnosed in around 1% of the population [1]. Familial liability confers risk for a broad spectrum of difficulties, including the broader autism phenotype (BAP) [2, 3]. There are currently no reliable predictors of autism in infancy, but characteristic behaviors emerge during the second year, enabling diagnosis after this age [4, 5]. Because indicators of brain functioning may be sensitive predictors, and atypical eye contact is characteristic of the syndrome [6–9] and the BAP [10, 11], we examined whether neural sensitivity to eye gaze during infancy is associated with later autism outcomes [12, 13]. We undertook a prospective longitudinal study of infants with and without familial risk for autism. At 6–10 months, we recorded infants' event-related potentials (ERPs) in response to viewing faces with eye gaze directed toward versus away from the infant [14]. Longitudinal analyses showed that characteristics of ERP components evoked in response to dynamic eye gaze shifts during infancy were associated with autism diagnosed at 36 months. ERP responses to eye gaze may help characterize the developmental processes that lead to later-emerging autism. The findings also elucidate the mechanisms driving the development of the social brain in infancy.
Human voices play a fundamental role in social communication, and areas of the adult "social brain" show specialization for processing voices and their emotional content (superior temporal sulcus, inferior prefrontal cortex, premotor cortical regions, amygdala, and insula). However, it is unclear when this specialization develops. Functional magnetic resonance imaging (fMRI) studies suggest that the infant temporal cortex does not differentiate speech from music or backward speech, but a prior study with functional near-infrared spectroscopy revealed preferential activation for human voices in 7-month-olds, in a more posterior location of the temporal cortex than in adults. However, the brain networks involved in processing nonspeech human vocalizations in early development are still unknown. To address this issue, in the present fMRI study, 3- to 7-month-olds were presented with adult nonspeech vocalizations (emotionally neutral, emotionally positive, and emotionally negative) and nonvocal environmental sounds. Infants displayed significant differential activation in the anterior portion of the temporal cortex, similarly to adults. Moreover, sad vocalizations modulated the activity of brain regions involved in processing affective stimuli, such as the orbitofrontal cortex and insula. These results suggest remarkably early functional specialization for processing the human voice and negative emotions.
How specialized is the infant brain for processing voices within our environment? Research in adults suggests that portions of the temporal lobe play an important role in differentiating vocalizations from other environmental sounds; however, very little is known about this process in infancy. Recent research in infants has revealed discrepancies in the cortical location of voice-selective activation, as well as in the age of onset of this response. The current study used functional near-infrared spectroscopy (fNIRS) to further investigate voice processing in awake 4- to 7-month-old infants. While listening to voice and non-voice sounds, infants showed robust and widespread activation in bilateral temporal cortex. Further, voice-selective regions of the bilateral anterior temporal cortex evidenced a steady increase in voice-selective activation (voice > non-voice activation) over 4–7 months of age. These findings support a growing body of evidence that cerebral specialization for human voice sounds emerges over the first 6 months of age.
Children with autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD) demonstrate face processing abnormalities that may underlie social impairment. Despite substantial overlap between ASD and ADHD, ERP markers of face and gaze processing have not been directly compared across pure and comorbid cases. Children with ASD (n=19), ADHD (n=18), comorbid ASD+ADHD (n=29), and typically developing (TD) controls (n=26) were presented with upright/inverted faces with direct/averted gaze, with concurrent recording of the P1 and N170 components. While the N170 was predominant in the right hemisphere in TD and ADHD, children with ASD (ASD/ASD+ADHD) showed a bilateral distribution. In addition, children with ASD demonstrated an altered response to gaze direction on P1 latency and no sensitivity to gaze direction on midline N170 amplitude compared to TD and ADHD. In contrast, children with ADHD (ADHD/ASD+ADHD) exhibited a reduced face inversion effect on P1 latency compared to TD and ASD. These findings suggest that children with ASD have specific abnormalities in gaze processing and altered neural specialisation, whereas children with ADHD show abnormalities at early visual attention stages. Children with ASD+ADHD showed an additive pattern, displaying the deficits of both disorders. Elucidating the neural basis of the overlap between ASD and ADHD is likely to inform aetiological investigation and clinical assessment.
Adults diagnosed with autism spectrum disorder (ASD) show a reduced sensitivity (degree of selective response) to social stimuli such as human voices. In order to determine whether this reduced sensitivity is a consequence of years of poor social interaction and communication or is present prior to significant experience, we used functional MRI to examine cortical sensitivity to auditory stimuli in infants at high familial risk for later-emerging ASD (HR group, N = 15), and compared this to infants with no family history of ASD (LR group, N = 18). The infants (aged between 4 and 7 months) were presented with voice and environmental sounds while asleep in the scanner, and their behaviour was also examined in the context of observed parent–infant interaction. Whereas LR infants showed early specialisation for human voice processing in right temporal and medial frontal regions, the HR infants did not. Similarly, LR infants showed stronger sensitivity than HR infants to sad vocalisations in the right fusiform gyrus and left hippocampus. Also, in the HR group only, there was an association between each infant's degree of engagement during social interaction and the degree of voice sensitivity in key cortical regions. These results suggest that at least some infants at high risk for ASD have atypical neural responses to the human voice with and without emotional valence. Further exploration of the relationship between behaviour during social interaction and voice processing may help us better understand the mechanisms that lead to different outcomes in at-risk populations.
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this question at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of combined auditory and visual speech signals, compared to auditory signals alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed modulation of N1/P2 amplitude and latency by visual speech cues; it also revealed greater attenuation of component amplitude for incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable across the child sample. These data suggest that auditory ERP modulation by visual speech reflects separable underlying cognitive processes, some of which mature earlier than others over the course of development.