Magnetoencephalography (MEG) is known for its temporal precision and good spatial resolution in cognitive brain research. Nonetheless, it is still rarely used in developmental research, and its role in developmental cognitive neuroscience has not been adequately addressed. The current review focuses on source analysis of MEG measurements and its potential to answer critical questions about the origins and patterns of neural activation underlying infants' early cognitive experience. The advantages of MEG source localization are discussed in comparison with functional magnetic resonance imaging (fMRI) and functional near-infrared spectroscopy (fNIRS), two leading imaging tools for studying cognition across ages. Challenges of current MEG experimental protocols are highlighted, including measurement and data processing, which could potentially be resolved by developing and improving both software and hardware. A selection of infant MEG research on auditory, speech, visual, motor, sleep, cross-modal, and clinical applications is then summarized and discussed with a focus on source localization analyses. Based on the literature review and the advancements of infant MEG systems and source analysis software, typical practices of infant MEG data collection and analysis are summarized as a basis for future developmental cognitive research.
Mandarin-speaking adults using cochlear implants (CI) experience more difficulties in perceiving lexical tones than consonants. This problem may result from the fact that CIs provide relatively sufficient temporal envelope information for consonant perception in quiet environments, but do not convey the fine spectro-temporal information considered to be necessary for accurate pitch perception. Another possibility is that Mandarin speakers with post-lingual hearing loss have developed language-specific use of these acoustic cues, impeding lexical tone processing under CI conditions. To investigate this latter hypothesis, syllable discrimination and word identification abilities for Mandarin consonants (place and manner) and lexical-tone contrasts (tones 1 vs. 3 and 1 vs. 2) were measured in 15 Mandarin-speaking children using CIs and age-matched children with normal hearing (NH). In the discrimination task, only children using CIs exhibited significantly lower scores for consonant place contrasts compared to other contrasts, including lexical tones. In the word identification task, children using CIs showed lower performance for all contrasts compared to children with NH, but both groups showed specific difficulties with tone 1 vs. 2 contrasts. This study suggests that Mandarin-speaking children using CIs are able to discriminate and identify lexical tones and, perhaps more surprisingly, have more difficulties when discriminating consonants.
Purpose: The aim of this study was to investigate infants' listening preference for emotional prosodies in spoken words and identify their acoustic correlates. Method: Forty-six 3- to 12-month-old infants (mean age = 7.6 months) completed a central fixation (or look-to-listen) paradigm in which four emotional prosodies (happy, sad, angry, and neutral) were presented. Infants' looking time to the string of words was recorded as a proxy of their listening attention. Five acoustic variables—mean fundamental frequency (F0), word duration, intensity variation, harmonics-to-noise ratio (HNR), and spectral centroid—were also analyzed to account for infants' attentiveness to each emotion. Results: Infants generally preferred affective over neutral prosody, with more listening attention to the happy and sad voices. Happy sounds with breathy voice quality (low HNR) and less brightness (low spectral centroid) maintained infants' attention more. Sad speech with shorter word duration (i.e., faster speech rate), less breathiness, and more brightness gained infants' attention more than happy speech did. Infants listened less to angry than to happy and sad prosodies, and none of the acoustic variables were associated with infants' listening interest in angry voices. Neutral words with a lower F0 attracted infants' attention more than those with a higher F0. Neither age nor sex effects were observed. Conclusions: This study provides evidence for infants' sensitivity to the prosodic patterns of the basic emotion categories in spoken words and for how the acoustic properties of emotional speech may guide their attention. The results point to the need to study the interplay between early socioaffective and language development.
Developmental studies have shown strong evidence that socially enriched speech signals (including prosodic modifications) attract infants' attention and facilitate language development. While emotion understanding is evident at 9 months of age (Otte et al., 2015), the developmental trajectory of emotional speech prosody perception is still unclear. The present study adopted a widely used preferential looking paradigm to measure 3- to 14-month-old infants' listening preference for English words spoken in neutral, happy, angry, and sad tones. Analysis using a linear mixed model showed that infants' preference for emotional prosody changed as a function of age. On average, the 3-month-olds listened longer to all emotional prosodies than to the neutral one, whereas older infants showed significantly diminished interest in the sad prosody, followed by the happy and angry voices. Around 12 months, infants appeared to listen to the emotional prosodies equally, with the exception of reduced interest in the angry prosody. These preferential listening measures were not correlated with the varying durations or fundamental frequencies of the spoken words across the emotional categories, indicating that the development of emotional speech prosody perception is not purely driven by acoustic properties but rather involves higher-order social cognition.
Purpose Spoken language is inherently multimodal and multidimensional in natural settings, but very little is known about how second language (L2) learners process multilayered speech signals containing both phonetic and affective cues. This study investigated how late L2 learners undertake parallel processing of linguistic and affective information in the speech signal at behavioral and neurophysiological levels. Method Behavioral and event-related potential measures were taken in a selective cross-modal priming paradigm to examine how late L2 learners (N = 24, mean age = 25.54 years) assessed the congruency of phonetic (target vowel: /a/ or /i/) and emotional (target affect: happy or angry) information between the visual primes of facial pictures and the auditory targets of spoken syllables. Results Behavioral accuracy data showed a significant congruency effect in affective (but not phonetic) priming. Unlike a previous report on monolingual first language (L1) users, the L2 users showed no facilitation in reaction time for congruency detection in either selective priming task. The neurophysiological results revealed a robust N400 response that was stronger in the phonetic condition but without clear lateralization; the N400 effect was weaker in late L2 listeners than in monolingual L1 listeners. Following the N400, late L2 learners showed a weaker late positive response than the monolingual L1 users, particularly in the left central to posterior electrode regions. Conclusions The results demonstrate distinct patterns of behavioral and neural processing of phonetic and affective information in L2 speech, with reduced neural representations in both the N400 and the later processing stage, and they provide an impetus for further research on similarities and differences in L1 and L2 multisensory speech perception in bilingualism.