In the present paper, we report the results of an empirical study on the effects of cognitive load on operatic singing. The main aim of the study was to investigate to what extent a working memory task affected the timing of operatic singers' performance. In particular, we focused on singers' tendency to speed up or slow down their performance of musical phrases and pauses. Twelve professional operatic singers were asked to perform an operatic aria three times: once without an additional working memory task, once with a concurrent working memory task (counting shapes on a computer screen), and once with a relatively more difficult working memory task (more shapes to be counted, appearing one after another). The results show that, in general, singers sped up their performance under heightened cognitive load. Interestingly, this effect was more pronounced in pauses, particularly longer pauses, than in musical phrases. We discuss the role of sensorimotor control and feedback processes in musical timing to explain these findings.
The "One-Person Choir" is a human-computer interface for singers that facilitates gestural control over a digital signal processing (DSP) module for harmonizing the singing voice in real time (see Figure 1). Harmonization adds extra pitch-shifted voices that are tonally related to the input voice. The interface captures global movements of the upper limbs by means of an integrated network of inertial sensors attached to the upper body of a singer. From these data, gestural cues are extracted and compared with a preconfigured gestural model that has been trained with empirical data. When the gestures of the singer match the preconfigured model, it is possible to control the harmonization of the singing input voice captured by a microphone. Thus, the interface allows a singer to naturally enhance the expressive qualities of his or her voice with the assistance of expressive gestures connected to an electronic environment. The One-Person Choir can be integrated into interactive multimedia installations that exploit the expressive power of gestures in combination with singing. As will be argued in this article, such installations illustrate, and elaborate on, an ongoing shift in contemporary electronic and electroacoustic music: the move from interactive systems (or hyperinstruments) to composing interactions (Di Scipio 2003).
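The control loop described above (gesture cues matched against a trained model, gating the harmonizer) can be sketched as follows. This is a minimal illustrative sketch, not the system's actual implementation: the function name, the similarity score, the matching threshold, and the fixed diatonic intervals are all assumptions introduced for clarity.

```python
# Hypothetical sketch of a per-frame harmonization decision: a similarity
# score between the live gesture and the preconfigured gestural model
# gates whether pitch-shifted harmony voices are added to the sung pitch.
# Interval choices (major third, perfect fifth) are illustrative.

THIRD = 4   # semitones above the sung pitch (major third)
FIFTH = 7   # semitones above the sung pitch (perfect fifth)

def harmonize(midi_pitch, gesture_score, threshold=0.6):
    """Return the MIDI pitches to synthesize for one analysis frame.

    midi_pitch    -- pitch detected from the microphone input
    gesture_score -- similarity (0..1) between the live gesture and
                     the trained gestural model (assumed measure)
    """
    voices = [midi_pitch]              # the sung voice is always kept
    if gesture_score >= threshold:     # gesture matches the model
        voices.append(midi_pitch + THIRD)
        voices.append(midi_pitch + FIFTH)
    return voices
```

In a real-time system this decision would run once per audio analysis frame, with the gesture score smoothed over time to avoid audible switching artifacts.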
The concepts of mediality and embodied music cognition offer relevant theoretical grounding for improving the design of technologically enhanced performance environments. This paper discusses (i) relevant theories that may be applicable to the analysis of gestures in professional operatic singing performances, and (ii) the resulting gestural mappings that might then be used for building a vocal augmentation tool. A methodology is presented that integrates narrative analysis and iterative prototyping, based on gestural and performance data. Implementation of these theories should improve the efficiency and design of vocal augmentation in theatrical contexts by increasing generalizability, strengthening dramatic integration, and facilitating a more cohesive, contextualized performance. The present study demonstrates the potential application of the theories of mediality and embodied music cognition in the development of technological mediators, as well as possible dynamic mapping strategies based on gesture-audio interaction and the physical realization of the performer's musical goals.
Musical training involves exposure to complex auditory and visual stimuli, memorization of elaborate sequences, and extensive motor rehearsal. It has been hypothesized that such multifaceted training may be associated with differences in basic cognitive functions, such as prediction, potentially translating into a facilitation in expert musicians. Moreover, such differences might generalize to non-auditory stimuli. This study was designed to test both hypotheses. We implemented a cross-modal attentional cueing task with auditory and visual stimuli, in which a target was preceded by compatible or incompatible cues in mainly compatible (80% compatible, predictable) or random (50% compatible, unpredictable) blocks. This allowed us to test prediction skills in musicians and controls. Musicians showed increased sensitivity to the statistical structure of the block, expressed as an advantage for compatible trials (and a disadvantage for incompatible trials), but only in the mainly compatible (predictable) blocks. Controls did not show this pattern. The effect held within modalities (auditory, visual), across modalities, and when controlling for short-term memory capacity. These results reveal a striking enhancement in cross-modal prediction in musicians in a very basic cognitive task.
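The block structure described above (80% vs. 50% cue-target compatibility) can be sketched as a simple trial generator. This is an illustrative reconstruction, assuming two-alternative cue and target locations; the actual stimuli, modalities, and trial counts of the study are not specified here.

```python
import random

def make_block(n_trials, p_compatible, seed=None):
    """Generate one block of cue-target trials.

    p_compatible = 0.8 yields a 'mainly compatible' (predictable) block,
    p_compatible = 0.5 a random (unpredictable) block, matching the
    design described in the abstract. Cue/target locations ('left',
    'right') are illustrative assumptions.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        cue = rng.choice(["left", "right"])
        compatible = rng.random() < p_compatible
        # A compatible cue points at the target; an incompatible cue
        # points at the opposite location.
        target = cue if compatible else ("right" if cue == "left" else "left")
        trials.append({"cue": cue, "target": target, "compatible": compatible})
    return trials
```

Sensitivity to block structure would then show up as faster responses on compatible trials, but only in blocks where `p_compatible` is high enough for the cue to carry predictive information.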