Group musical improvisation is thought to be akin to conversation and, as a therapeutic intervention, has been shown to improve communicativeness, sociability, creative expression, and overall psychological health. Understanding these therapeutic effects requires clarifying the nature of brain activity during improvisational cognition. Some insight into brain activity during improvisational music cognition has been gained via functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), but we have found no reports based on magnetoencephalography (MEG). In the present study, we aimed to demonstrate the feasibility of improvisational music performance experiments in MEG. We designed a novel MEG-compatible keyboard and used it with experienced musicians (N = 13) in a music performance paradigm to differentiate, spectrally and spatially, spontaneous brain activity during mental imagery of improvisational music performance. Analyses of source activity revealed that mental imagery of improvisational performance induced greater theta (5–7 Hz) activity in left temporal areas associated with rhythm production and communication, greater alpha (8–12 Hz) activity in left premotor and parietal areas associated with sensorimotor integration, and less beta (15–29 Hz) activity in right frontal areas associated with inhibitory control. These findings support the notion that musical improvisation is conversational and suggest that the creation of novel auditory content is facilitated by a more internally directed, disinhibited cognitive state.
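For a concrete sense of the kind of band-limited contrast described above, the sketch below shows one generic way to compare theta, alpha, and beta power between an improvisation-imagery condition and a control condition. It is not the authors' source-analysis pipeline; the data, sampling rate, and condition labels are synthetic placeholders, and only standard SciPy routines are assumed.

```python
# Minimal sketch (not the authors' pipeline): contrast band-limited power
# between two imagery conditions using Welch PSDs and a paired t-test.
# Data, sampling rate, and condition labels are synthetic placeholders.
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
fs = 1000                                  # sampling rate (Hz), assumed
n_subjects, n_samples = 13, 10 * fs        # 13 musicians, 10 s per condition
bands = {"theta": (5, 7), "alpha": (8, 12), "beta": (15, 29)}

def band_power(signal, fs, lo, hi):
    """Mean Welch PSD within a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Placeholder source-level time courses for the two conditions.
improv = rng.standard_normal((n_subjects, n_samples))
control = rng.standard_normal((n_subjects, n_samples))

for name, (lo, hi) in bands.items():
    p_improv = np.array([band_power(x, fs, lo, hi) for x in improv])
    p_control = np.array([band_power(x, fs, lo, hi) for x in control])
    t, p = ttest_rel(p_improv, p_control)  # paired contrast across subjects
    print(f"{name}: t = {t:.2f}, p = {p:.3f}")
```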
Mild cognitive impairment (MCI) is a borderline or precursor state of dementia. To optimize interventions for MCI, it is essential to clarify the underlying neural mechanisms; however, knowledge of the brain regions responsible for MCI is still limited. Here, we administered the Montreal Cognitive Assessment (MoCA), a screening test for MCI, to 20 healthy elderly participants (mean age 67.5 years) and then recorded magnetoencephalography (MEG) while they performed a visual sequential memory task. In the task, each participant memorized the directions (one of four possibilities) of seven sequentially presented arrow images. Recall accuracy for items at the beginning of the memory sequence was significantly and positively related to MoCA score. MEG revealed stronger alpha-band (8–13 Hz) rhythm desynchronization bilaterally in the precuneus (PCu) for participants with higher (normal-range) MoCA scores. Most importantly, this PCu desynchronization weakened with lower MoCA scores during the beginning of sequential memory encoding, a period that should rely on working memory and thus be sensitive to cognitive decline. Our results suggest that deactivation of the PCu is associated with early MCI and corroborate pathophysiological findings from post-mortem tissue that implicate hypoperfusion of the PCu in the early stages of Alzheimer's disease. They also indicate that cognitive decline may be detectable early and non-invasively by monitoring PCu activity with electrophysiological methods.
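The following sketch illustrates, in generic terms, how alpha-band desynchronization relative to a pre-stimulus baseline might be quantified and related to MoCA scores across participants. It is not the authors' analysis; the time windows, sampling rate, precuneus time courses, and scores are assumed placeholders.

```python
# Minimal sketch (not the authors' analysis): quantify alpha-band (8-13 Hz)
# desynchronization as percent power change from a pre-stimulus baseline,
# then relate it to MoCA scores across participants. All data are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
fs = 1000                                  # sampling rate (Hz), assumed
n_participants = 20
t_baseline = slice(0, fs)                  # first 1 s = baseline, assumed
t_encoding = slice(fs, 3 * fs)             # next 2 s = early encoding, assumed

b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")

def alpha_erd(signal):
    """Percent change in alpha power during encoding vs. baseline
    (negative values indicate desynchronization)."""
    envelope = np.abs(hilbert(filtfilt(b, a, signal))) ** 2
    base = envelope[t_baseline].mean()
    task = envelope[t_encoding].mean()
    return 100.0 * (task - base) / base

# Placeholder precuneus time courses (3 s each) and MoCA scores.
pcu_signals = rng.standard_normal((n_participants, 3 * fs))
moca_scores = rng.integers(22, 31, size=n_participants)

erd = np.array([alpha_erd(x) for x in pcu_signals])
r, p = pearsonr(moca_scores, erd)          # negative r would mean higher MoCA -> stronger ERD
print(f"MoCA vs. alpha ERD: r = {r:.2f}, p = {p:.3f}")
```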
The rapid rise of voice user interface technology has changed the way users traditionally interact with interfaces, as tasks requiring gestural or visual attention are replaced by vocal commands. This shift has equally affected designers, who must set aside common digital interface guidelines to adapt to non-visual user interaction (No-UI) methods. Guidelines for voice user interface evaluation are far less mature than those for digital interface evaluation, resulting in a lack of consensus and clarity. We therefore sought to contribute to the emerging literature on voice user interface evaluation and thereby assist user experience professionals in creating optimal vocal experiences. To do so, we compared the effectiveness of physiological features (e.g., phasic electrodermal activity amplitude) and speech features (e.g., spectral slope amplitude) for predicting the intensity of users' emotional responses during voice user interface interactions. We performed a within-subjects experiment in which the speech, facial expression, and electrodermal activity responses of 16 participants were recorded during voice user interface interactions purposely designed to elicit frustration and shock, yielding 188 analyzed interactions. Our results suggest that the physiological measure of facial expression, and its extracted feature of automatic facial expression-based valence, is the most informative indicator of emotional events experienced during voice user interface interactions. By comparing the unique effectiveness of each feature, we offer both theoretical and practical contributions: the results add to the voice user interface literature while providing key insights for efficient voice user interface evaluation.
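As a rough illustration of comparing feature effectiveness, the sketch below fits a simple cross-validated regression per feature and reports R² against a synthetic intensity target. It is not the authors' modeling approach; the feature names and data are placeholders, and it ignores the within-subjects structure of the real experiment for simplicity.

```python
# Minimal sketch (not the authors' models): compare how well single features
# (e.g., facial-expression valence, phasic EDA amplitude, spectral slope)
# predict emotional response intensity via cross-validated R^2.
# Feature names and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_interactions = 188

# Placeholder per-interaction features and an intensity target.
features = {
    "facial_valence": rng.standard_normal(n_interactions),
    "phasic_eda_amplitude": rng.standard_normal(n_interactions),
    "speech_spectral_slope": rng.standard_normal(n_interactions),
}
intensity = 0.6 * features["facial_valence"] + rng.standard_normal(n_interactions)

for name, x in features.items():
    scores = cross_val_score(LinearRegression(), x.reshape(-1, 1),
                             intensity, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {scores.mean():.2f}")
```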