Introduction: Slow-wave sleep (SWS) slow waves and sleep spindle activity have been shown to be crucial for memory consolidation. Recently, memory consolidation has been causally facilitated in human participants via auditory stimuli phase-locked to SWS slow waves.

Aims: Here, we aimed to develop a new acoustic stimulation protocol to facilitate learning and to validate it using different memory tasks. Most importantly, the stimulation setup was automated so as to be applicable for ambulatory home use.

Methods: Fifteen healthy participants slept 3 nights in the laboratory. Learning was tested with 4 memory tasks (word pairs, serial finger tapping, picture recognition, and face-name association). Additional questionnaires addressed subjective sleep quality and overnight changes in mood. During the stimulation night, auditory stimuli were adjusted and targeted by an unsupervised algorithm to be phase-locked to the negative peak of slow waves in SWS. During the control night, no sounds were presented.

Results: The sound stimulation increased both slow-wave (p = .002) and sleep spindle activity (p < .001). When overnight improvement in memory performance was compared between the stimulation and control nights, we found a significant effect in the word pair task but not in the other memory tasks. The stimulation did not affect sleep structure or subjective sleep quality.

Conclusions: We showed that the memory effect of the SWS-targeted, individually triggered single-sound stimulation is specific to verbal associative memory. Moreover, the automated, ambulatory sound stimulation setup proved promising and allows for a broad range of potential follow-up studies.
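The closed-loop idea behind such a protocol, detecting the negative peak of a slow wave and triggering a sound at that moment, can be sketched offline as follows. This is a minimal illustration, not the study's actual unsupervised algorithm: the 50 ms smoothing window, the −75 µV amplitude threshold, and the simulated signal are all assumptions chosen for the example, and a real system would run online with a causal slow-wave-band filter and phase prediction.

```python
import numpy as np

def detect_negative_peaks(eeg_uv, fs, threshold_uv=-75.0):
    """Return sample indices of candidate slow-wave negative peaks.

    Offline illustration only: smooth the signal, then mark local
    minima that fall below an amplitude threshold. The smoothing
    window and threshold are illustrative assumptions, not
    parameters taken from the study.
    """
    win = max(3, int(fs * 0.05) | 1)       # odd window, ~50 ms
    kernel = np.ones(win) / win
    smoothed = np.convolve(eeg_uv, kernel, mode="same")

    peaks = []
    for i in range(1, len(smoothed) - 1):
        if (smoothed[i] < threshold_uv
                and smoothed[i] < smoothed[i - 1]
                and smoothed[i] <= smoothed[i + 1]):
            peaks.append(i)                # an online system would play the sound here
    return peaks

# Simulated 1 Hz slow oscillation (100 µV peak), 2 s at 250 Hz;
# negative peaks fall near t = 0.25 s and t = 1.25 s.
fs = 250
t = np.arange(0, 2, 1 / fs)
eeg_uv = -100.0 * np.sin(2 * np.pi * 1.0 * t)
peaks = detect_negative_peaks(eeg_uv, fs)
```

In an ambulatory setup, the same detection logic would run sample-by-sample on streamed EEG, with the threshold adapted per participant, which is presumably what "adjusted and targeted by an unsupervised algorithm" refers to.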
The spatiotemporal dynamics of the neural processing of spoken morphologically complex words are still an open issue. In the current study, we investigated the time course and neural sources of spoken inflected and derived words using simultaneously recorded electroencephalography (EEG) and magnetoencephalography (MEG) responses. Ten participants (native speakers) listened to inflected, derived, and monomorphemic Finnish words and judged their acceptability. EEG and MEG responses were time-locked to both the stimulus onset and the critical point (suffix onset for complex words, uniqueness point for monomorphemic words). The ERP results showed that inflected words elicited a larger left-lateralized negativity than derived and monomorphemic words approximately 200 ms after the critical point. Source modeling of MEG responses showed one bilateral source in the superior temporal area ∼100 ms after the critical point, with derived words eliciting stronger source amplitudes than inflected and monomorphemic words in the right hemisphere. Source modeling also showed two sources in the temporal cortex approximately 200 ms after the critical point. There, inflected words showed a more systematic pattern in source locations and elicited temporally distinct source activity in comparison to the derived word condition. The current results provide electrophysiological evidence for at least partially distinct cortical processing of spoken inflected and derived words. In general, the results support models of morphological processing stating that during the recognition of inflected words, the constituent morphemes are accessed separately. With regard to derived words, stem and suffix morphemes might be at least initially activated along with the whole word representation.
Informal music activities such as singing may lead to augmented auditory perception and attention. In order to study the accuracy and development of music-related sound change detection in children with cochlear implants (CIs) and normal hearing (NH) aged 4–13 years, we recorded their auditory event-related potentials twice (at T1 and T2, 14–17 months apart). We compared their MMN (preattentive discrimination) and P3a (attention toward salient sounds) to changes in piano tone pitch, timbre, duration, and gaps. Of particular interest was to determine whether singing can facilitate auditory perception and attention in CI children. We found that, compared to the NH group, the CI group had smaller and later timbre P3a and later pitch P3a, implying degraded discrimination and attention shift. Duration MMN became larger from T1 to T2 only in the NH group. The development of response patterns for duration and gap changes was not similar in the CI and NH groups. Importantly, CI singers had enhanced or rapidly developing P3a or P3a-like responses across all change types. In contrast, CI non-singers had rapidly enlarging pitch MMN without enlargement of P3a, and their timbre P3a became smaller and later over time. These novel results show an interplay between MMN, P3a, brain development, cochlear implantation, and singing. They imply augmented development of neural networks for attention and more accurate neural discrimination associated with singing. In future studies, the differential development of P3a between CI and NH children should be taken into account when comparing these groups. Moreover, further studies are needed to assess whether singing enhances auditory perception and attention in children with CIs.