University of California Press is collaborating with JSTOR to digitize, preserve, and extend access to Music Perception: An Interdisciplinary Journal. Musicians often make gestures and move their bodies to express a musical intention. In order to explore to what extent emotional intentions can be conveyed through musicians' movements, participants watched and rated silent video clips of musicians performing with the emotional intentions Happy, Sad, Angry, and Fearful. In the first experiment, participants rated the emotional expression and movement character of marimba performances. The results showed that the intentions Happiness, Sadness, and Anger were well communicated, whereas Fear was not. Showing selected parts of the player only slightly influenced the identification of the intended emotion. In the second experiment, participants rated the same emotional intentions and movement character for performances on bassoon and soprano saxophone. The ratings from the second experiment confirmed that Fear was not communicated, whereas Happiness, Sadness, and Anger were recognized. The rated movement cues were similar in the two experiments and were analogous to their audio counterparts in music performance.
We investigated the effect of musical expertise on sensitivity to asynchrony for drumming point-light displays, which varied in their physical characteristics (Experiment 1) or in their degree of audiovisual congruency (Experiment 2). In Experiment 1, 21 repetitions of three tempos × three accents × nine audiovisual delays were presented to four jazz drummers and four novices. In Experiment 2, ten repetitions of two audiovisual incongruency conditions × nine audiovisual delays were presented to 13 drummers and 13 novices. Participants gave forced-choice judgments of audiovisual synchrony. The results of Experiment 1 showed an enhancement in experts' ability to detect asynchrony, especially for slower drumming tempos. In Experiment 2, an increase in sensitivity to asynchrony was found for incongruent stimuli; this increase, however, was attributable only to the novice group. Altogether, the results indicate that through musical practice we learn to ignore variations in stimulus characteristics that would otherwise affect our multisensory integration processes.
When we observe someone perform a familiar action, we can usually predict what kind of sound that action will produce. Musical actions are extensively practiced by musicians but not by non-musicians, and thus offer a unique way to examine how action expertise affects brain processes when the predictability of the produced sound is manipulated. We used functional magnetic resonance imaging to scan 11 drummers and 11 age- and gender-matched novices who made judgments on point-light drumming movements presented with sound. In Experiment 1, sound was synchronized or desynchronized with drumming strikes, while in Experiment 2 sound was always synchronized, but the natural covariation between sound intensity and velocity of the drumming strike was maintained or eliminated. Prior to MRI scanning, each participant completed psychophysical testing to identify personal levels of synchronous and asynchronous timing to be used in the two fMRI activation tasks. In both experiments, the drummers' brain activation was reduced in motor and action representation brain regions when sound matched the observed movements, and was similar to that of novices when sound was mismatched. This reduction in neural activity occurred bilaterally in the cerebellum and left parahippocampal gyrus in Experiment 1, and in the right inferior parietal lobule, inferior temporal gyrus, middle frontal gyrus, and precentral gyrus in Experiment 2. Our results indicate that brain functions in action-sound representation areas are modulated by multimodal action expertise.
Like all music performance, percussion playing requires high control over timing and sound properties. Specific to percussionists, however, is the need to adjust the movement to different instruments with varying physical properties and tactile feedback to the player. Furthermore, the well-defined note onsets and short interaction times between player and instrument do not allow for much adjustment once a stroke is initiated. The paper surveys research that shows a close relationship between movement and sound production, and how playing conditions such as tempo and the rebound after impact affect the movements. Furthermore, I discuss differences in movement organization and the visual information conveyed by striking movements.
Virtuosity in music performance is often associated with fast, precise, and efficient sound-producing movements. The generation of such highly skilled movements involves complex joint and muscle control by the central nervous system, and depends on the ability to anticipate, segment, and coarticulate motor elements, all within the biomechanical constraints of the human body. When successful, such motor skill should lead to what we characterize as fluency in musical performance. Detecting typical features of fluency could be very useful for technology-enhanced learning systems, assisting and supporting students during their individual practice sessions by giving feedback and helping them to adopt sustainable movement patterns. In this study, we propose to assess fluency in musical performance as the ability to smoothly and efficiently coordinate while accurately performing slow, transitionary, and rapid movements. To this end, the movements of three cello players and three drummers at different levels of skill were recorded with an optical motion capture system, while a wireless electromyography (EMG) system recorded the corresponding muscle activity from relevant landmarks. We analyzed the kinematic and coarticulation characteristics of these recordings separately and then proposed a combined model of fluency in musical performance predicting musical sophistication. Results suggest that expert performers' movements are characterized by consistently smooth strokes and scaling of muscle phasic coactivation. The explored model of fluency as a function of movement smoothness and coarticulation patterns was shown to be limited by the sample size, but it serves as a proof of concept. Results from this study show the potential of a technology-enhanced objective measure of fluency in musical performance, which could lead to improved practices for aspiring musicians, instructors, and researchers.
In two experiments, participants tuned a drum machine to their preferred dance tempo. Measurements of height, shoulder width, leg length, and weight were taken for each participant, and their sex was recorded. Using a multiple regression analysis, height and leg length combined were found to be the best predictor of preferred dance tempo in Experiment 1. A second experiment, in which males and females were matched in terms of height, resulted in no significant correlation between sex and preferred dance tempo. In the matched sample, height was found to be the single best predictor, but with a relatively small effect size. These results are consistent with a biomechanical "resonance" model of dancing. Author notes: The authors have contributed to the article in the following way: Dahl is principal author and responsible for the major part of data collection and analysis; Huron is responsible for the idea of the study and main co-author; Brod contributed with data collection as experimenter in Hannover and participated in the writing process; Altenmüller contributed with medical and anatomical knowledge for the anthropometric measurements in Experiment 2 and participated in the writing process.
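The regression approach described above can be sketched in a few lines. This is an illustrative example only, not the authors' data or analysis code: the participant measurements and tempo values below are hypothetical, and the model simply regresses preferred tempo on height and leg length with an intercept, as the abstract describes.

```python
# Illustrative multiple regression of preferred dance tempo on anthropometric
# predictors. All numbers are hypothetical, chosen only to show the mechanics.
import numpy as np

# Hypothetical measurements for six participants:
height = np.array([160.0, 165.0, 170.0, 175.0, 180.0, 185.0])   # cm
leg_length = np.array([78.0, 81.0, 84.0, 86.0, 90.0, 93.0])     # cm
tempo = np.array([128.0, 126.0, 124.0, 121.0, 119.0, 117.0])    # preferred BPM

# Design matrix with an intercept column, then ordinary least squares.
X = np.column_stack([np.ones_like(height), height, leg_length])
coef, *_ = np.linalg.lstsq(X, tempo, rcond=None)

# Coefficient of determination (R^2) for the fitted model.
predicted = X @ coef
r_squared = 1 - np.sum((tempo - predicted) ** 2) / np.sum((tempo - tempo.mean()) ** 2)
print(f"intercept={coef[0]:.2f}, b_height={coef[1]:.3f}, "
      f"b_leg={coef[2]:.3f}, R^2={r_squared:.3f}")
```

A full analysis would of course report significance tests and compare nested models (e.g., height alone versus height plus leg length) to identify the best predictor, as the study does.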
Whereas wind instrumentalists and string players have continuous control of the acoustic sound parameters during playing, a percussionist's direct contact with the instrument is limited to a few milliseconds. The player has no possibility of adjusting grip or dampening during the actual contact. Whatever timbre and sound level the player is aiming for therefore has to be integrated into the entire striking gesture. How can the player control the complex interaction between drumstick and drumhead? In order to investigate how the player's grip and striking gestures influence the sound characteristics of drum strokes, we recorded movements, audio, contact time, and contact force during drumming. Different instructions were given with the intention of influencing how the player's grip controls the drumstick. "Normal" strokes were allowed to rebound freely from the drumhead. For "controlled" strokes, the player was asked to control the ending position of the drumstick, stopping it as close as possible to the drumhead after the stroke. Preliminary analysis showed that the instructions influenced contact force, contact time, and perceptual ratings of the strokes. Further results and implications will be discussed.